Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 12 - 3 PM PST Monday - Friday. Available on X, Apple, Spotify, and YouTube.
You're watching TBPN. Today is Tuesday, 04/01/2025. We are live from the Temple Of Technology, the fortress of finance. Jordy, the capital of capital. This show starts now.
Speaker 1:We got a ton of news. OpenAI is worth $300,000,000,000 now. Hermès is going back to their roots and focusing on horse saddles, both equally important stories in the world of technology journalism. We're gonna take you through it today. And we have a bunch of great guests for you.
Speaker 1:We got David Senra coming on the show. Samir from Colin and Samir, we're gonna talk about the creator economy. He just did a fantastic interview with MrBeast and Mark Zuckerberg, putting two of the greats together. You know? And we'll go from there.
Speaker 1:And it's OpenAI day. They're raising $40,000,000,000 at a $300,000,000,000 valuation. We got two folks from OpenAI joining the show, Will and Aidan. So we'll talk to them. Probably not about the financing, probably about the product and all the technology that they're building, but we're very excited for that.
Speaker 2:Anyway, we'll be ranking their top 5,000 Ghibli memes.
Speaker 1:Yes. And we'll be pressuring them into buying watches on Bezel.
Speaker 2:That's right.
Speaker 1:Undoubtedly. Anyway, let's kick it off. Do we have any breaking news? I think we should address: we are officially at 32,000 followers.
Speaker 1:Right?
Speaker 2:We have doubled it. Doubling. Another doubling. Another doubling.
Speaker 1:The show is bigger than ever. We're taking over. So we're at 32.3 now. And that means that it's a Dom Pérignon episode technically, but we are shifting our strategy. So these Dom Pérignon episodes now are in partnership with ZBiotics, one of my friend's companies, actually.
Speaker 1:They are not an official sponsor, but I do love ZBiotics. And I actually introduced the founders. Stephen Lam, I went to high school with. I met Zach, the CEO. Fantastic. Through YC.
Speaker 1:I think he did the early YC, like, the startup school, sent out an email: hey, I'm looking for people. And Stephen was looking for a cofounder, wanting to get in the startup community. Kinda introduced them, and they hit it off. And they've been running this company for, like, half a decade now.
Speaker 1:It's great.
Speaker 2:Crazy.
Speaker 1:Almost a decade. I guess they went through YC same
Speaker 2:And people are really obsessed with this product.
Speaker 1:Oh, yeah. Yeah. So I swear by it. If you're not familiar, ZBiotics is a pre-alcohol probiotic drink designed specifically to kill hangovers, basically. It's hugely popular at weddings.
Speaker 1:People buy these and they hand them out to all their guests. You take them. It can't make up for a lack of sleep, so you will be tired. But it really does a great job of, like, killing that terrible feeling.
Speaker 2:No. It's great. And if we were gonna be drinking this bottle of champagne today, it would be perfect because it's the middle of the day. You don't wanna go home at the end of the day with a hangover.
Speaker 1:Yeah. Right? Yeah. And so we wanna shift from drinking the Dom Pérignon on stream to giving away bottles of Dom Pérignon.
Speaker 2:And this Dom actually has a sponsor itself, Slow Ventures.
Speaker 1:Oh, yes. Slow Ventures.
Speaker 2:They sent it
Speaker 1:to us. They sent this to us.
Speaker 2:Good friends of the show.
Speaker 1:So thank you
Speaker 3:to Slow.
Speaker 1:We're gonna pass it on. We'll regift it.
Speaker 2:To a founder. Yeah. Regifting can, you know, be a little bit taboo. But in this case, it's really a celebration.
Speaker 1:I think you're a founder. You listen to the show. You win the bottle of Dom Pérignon. And then you send this to Sam Lessin. Yeah.
Speaker 1:Yeah. It'd be like, hey, I wanna pitch you. And it's just a snake eating its tail, an Ouroboros of Dom Pérignon.
Speaker 2:What do we
Speaker 1:This could be the most storied bottle.
Speaker 2:How are we gonna give it away?
Speaker 1:I wanted to see a super creative five-star review on Apple Podcasts or Spotify. I think that's fun. If you've already left one, get a friend's phone. Leave another one.
Speaker 2:That's right.
Speaker 1:Screenshot it. Send it to us. Just tweet it at us. And the one that we think is the funniest, we'll get in touch and mail this to you. How about that?
Speaker 2:Boom. Perfect.
Speaker 1:You gotta be over 21, obviously. And tag TBPN. It's kind of a contest, so I guess there's no cost to enter. No purchase necessary. Yeah.
Speaker 1:But I guess this isn't a purchase anyway. So Yeah. I don't know. Just please don't sue us. Like, we're just trying to give out a bottle of Dom.
Speaker 1:It's fun. I don't know. People get worried about these, like, contests. Like, they need to be, like, regulated and stuff. Like, I buy it.
Speaker 1:I want to be a good Samaritan.
Speaker 2:They're not regulating Dom giveaways yet.
Speaker 1:They're not. Not yet. But if we get any bigger, they might. Anyway, let's go through the SoftBank deal, OpenAI. Masayoshi Son got the deal done. He's putting in $10,000,000,000.
Speaker 2:Our leverage king.
Speaker 1:Leverage king. Yeah. And people are like, oh, he's using debt. So the Wall Street Journal has this article, this hit piece against debt, which we love here. They say, how is SoftBank funding its mega investment in OpenAI?
Speaker 1:A lot of debt. Like, that's a bad thing. Baby, he's going long. He's going turbo long. We love it.
Speaker 1:We love to see it. Masayoshi Son's company said the first ten billion would be financed by borrowing from Japanese bank Mizuho and other lenders. Of course, we've seen this stuff before, and SoftBank has a lot of assets, so it actually makes sense. But we'll talk about how they're collateralizing it in a little bit. What you got?
Speaker 2:No. I was going to say, so to be clear, this is a staged investment.
Speaker 1:Yes.
Speaker 2:They're doing $10,000,000,000 now.
Speaker 1:And 30,000,000,000 in
Speaker 2:2026. The first ten, seemingly entirely from Mizuho and a handful of other lenders, to be completed this month, with the remaining 30 to come by the beginning of next year. So Masa is going to be under this guy likes to put himself under pressure. He's going to be under pressure for the rest of the year knowing he's got this sort of $30,000,000,000 investment that he needs to make.
Speaker 1:Yeah.
Speaker 2:That he needs to come up with some combination of debt and equity.
Speaker 1:It is funny because couldn't OpenAI have just gone to Mizuho directly and gotten debt if they wanted debt? But now they have preferred equity on their balance sheet. Yeah. It's amazing.
Speaker 1:It's great
Speaker 2:for OpenAI.
Speaker 1:Potentially, everyone wins. Right? Like, OpenAI does really well, Mizuho gets paid back. They want interest. They want to be paid back.
Speaker 1:Masa sees the upside, and OpenAI doesn't have any debt on their balance sheet. Yeah. It's beautiful. This is capitalism, baby. I love it.
Speaker 2:And, to be clear, to give Masa some credit, couldn't he fund this entirely by selling off some of his Arm holdings? For sure.
Speaker 1:Yeah. But this deal is the largest investment ever in a startup. We seriously need a bigger gong. Hold on to that gong. There we go.
Speaker 1:We gotta hit the gong hard for this one. This is a big deal.
Speaker 2:Whatever anybody has to say about Sam, the guy knows how to do deals.
Speaker 1:This was the whole thesis of OpenAI from day one. It's like Greg Brockman, the former CTO of Stripe, Sam Altman. He's invested in a billion things, made so much money. Like, the founding team was really, really incredible. And if your whole thesis is just, like, bet on founders, like, Sam's pretty good at making money.
Speaker 1:Like, it's not that crazy.
Speaker 2:I feel like your favorite investments are always into companies where the founders just, like, you can tell, like, they're good at making money.
Speaker 1:He's commercial. Like, he doesn't like losing money, and I think that's great, and I think that's exactly what you want. And that's why I'm so excited about it not being a nonprofit anymore. Like, it is clear that, as a consumer tech company, it needs to be a for-profit.
Speaker 1:And, yeah, when you think about, like, could this be the next Google? Like, ChatGPT for the last year has been in the home row of my iPhone. Yeah. And it's not because I'm shilling for Sam or something. It's just a useful product.
Speaker 1:It needs to be there. And like, when I think about what else is in the home row, it's like Google, Apple, and OpenAI. And so, like, can it get to, you know, hundreds of billions of dollars of valuation? Like, sure. It seems fine.
Speaker 1:Yeah. But, I mean, I'm sure there'll be a lot of good takes about where that could go wrong. Obviously, it is a very high valuation. Obviously, when Masa is buying, Sam Lessin says, you wanna be selling. Who knows?
Speaker 1:I'm I'm I'm just excited to see a lot of money change hands and a lot of debt stack up. Together with SoftBank's pledge to lead the $100,000,000,000 Stargate cloud computing initiative with OpenAI, the investment marks a massive bet on the artificial intelligence startup. It intertwines the fortunes of SoftBank with a company that expects to lose billions of dollars for years to come. The hope is that OpenAI emerges as the leader of the pack in a race to spread artificial intelligence throughout society and commerce, a market that many believe could be worth trillions of dollars a year. This is what Sam was saying about, like, it's either worth 0 or a trillion.
Speaker 1:And so, like, you average that out and you're like, yeah, half a trillion.
Speaker 2:Yeah. And it's hard to do a discounted cash flow analysis of that binary of an outcome.
Speaker 1:I mean, this is one of Peter Thiel's takes, like, when they were doing the DCF on Google, and Andrew Reed talked about this too. Like, they just didn't have the spreadsheets go out far enough to understand that, oh, yeah, if you really play out this trend, like, Apple and Google, these companies could be throwing off, like, hundreds of billions of dollars of cash a year. The numbers just kept going up. They never got to that.
Speaker 1:And then no matter how far you went out, if you discounted it back, it would have been a much bigger number, but no one was really thinking that long term. Now people are like
Speaker 2:And that's when tech decided to say, no thanks, finance. We're done. We're done with you. We're done with your models. Yep.
Speaker 2:It's purely vibe investing now.
Speaker 1:Really. And now tech has gone the other direction, which is like, oh, this thing isn't gonna make money for two decades, and then it'll make a trillion dollars. Like, sure. I'll pay you a trillion dollars right now. And maybe we've gone too far. Who knows? We will see. But good luck to everyone involved. And congratulations to all the OpenAI shareholders and employees who seem to be doing very well
Speaker 3:right now.
Speaker 2:Yes. Yeah. You have to imagine a good amount of this is going to secondary, you know, even if it's a small percentage relative. Even if it's 500,000,000, yep, still pretty meaningful.
Speaker 1:SoftBank, always delivering great quotes. The statement says the information revolution has now entered a new phase led by artificial intelligence, and called OpenAI, its partner, the closest to achieving AGI, in which computers operate on the level of humans. And Sam was tweeting about this. He's saying, like, AGI has been achieved externally, and it's unclear. I think he might just be April Fools' joking.
Speaker 1:I don't know. But, clearly, the product works. People like the images. People like the text responses. People like deep research.
Speaker 1:I like these products, and I think people will continue to use them. Yep. Obviously, it's extremely competitive, and there are several hundred-billion-dollar-plus efforts now to win this war. Yep.
Speaker 1:And it's the best time in history to be a podcaster.
Speaker 2:Yeah. The edge that OpenAI has is, if you talk to people that aren't as terminally online as the rest of us, they will tell you, I love OpenAI. They don't just like it and use it. They have a passion for it, just as
Speaker 1:Yeah.
Speaker 2:Totally. Everyday consumers.
Speaker 1:Yeah. And they'll be like
Speaker 2:And it's hard to get them to switch. If you tell somebody there's something better than OpenAI, they're gonna say, like, but I love OpenAI already.
Speaker 1:Yeah. Exactly. You're like, no. No. No.
Speaker 1:But it's two points better on MMLU. And they're like, I don't know what that is.
Speaker 2:This one won an IMO gold medal.
Speaker 1:Yeah. They're like,
Speaker 2:what? What's
Speaker 1:that? Yeah. Yeah. It is very funny. Softbank is taking a lot of risks for a piece of OpenAI.
Speaker 1:Ratings agency S and P Global said Tuesday that, said Tuesday that Softbank's financial condition will likely deteriorate as a result of the OpenAI investment and that it plans to add debt that could lead the agency to consider downgrading SoftBank's ratings. None of the startup none of the startups with early leads in generative AI have shown that they can operate profitably. I don't know how true that is. Like, you could probably look at the GPT four training run and amortize just that. But, yes, I mean, on a on a quarter to quarter basis, these companies aren't profitable.
Speaker 1:So in terms of, like, the overall business, it's a good point. Yeah. And the sector is pouring tens of billions of dollars into data centers based on assumptions, not yet proven, of a future in which AI rapidly permeates the globe. It feels like it already is rapidly permeating, but I guess it's up for debate. Early tech leaders often falter, a point SoftBank learned when it made a dot-com-era bet that Yahoo would be the dominant force in search.
Speaker 1:In the background, debt has been a common feature of Son's risk-heavy strategy. The CEO borrowed heavily for the company's successful acquisitions of Vodafone's Japanese unit and chip design company Arm. More recently, SoftBank has been licking its wounds from piling tens of billions of dollars into startups just before values plunged in 2021, and Son repeatedly said SoftBank would stay on defense. Now, having pivoted to offense, SoftBank has ramped up spending, including a $6,500,000,000 acquisition of chip startup Ampere.
Speaker 2:Go back and listen to our deep dive on Masa and SoftBank in general. Fascinating. It's very fun and fascinating. I think what people fail to consider is that Masa is a gambler. Mhmm.
Speaker 2:And he's not oblivious
Speaker 4:to
Speaker 2:the risks that he takes, but he's constantly searching for that next absolute banger.
Speaker 1:Totally. And
Speaker 2:There's plenty of, like, you know, nontechnical but powerful vibes-based analysis that you could do to, you know, make the case that this could very well be a fantastic outcome for SoftBank.
Speaker 1:So It's one or
Speaker 4:the other.
Speaker 1:It's either a good or a bad investment. It's probably not the future.
Speaker 2:We talked to the experts, and, you know, we determined that it's it's gonna be either good or bad.
Speaker 1:Yeah. It's gonna be either good or bad. You heard it here first, folks. But it's gonna be extremely good if you participate in that secondary deal and then head over to getbezel.com and shop for over 24,500 luxury watches. I can't imagine a more AGI-resistant asset than a luxury watch.
Speaker 1:They're not making any more F.P. Journes after
Speaker 2:And they're not making any more Rolexes. Exactly. Or Pateks. No, I'm just saying, yeah, the actual companies themselves.
Speaker 2:Sure. If you want to make the next Rolex, you have to go back a hundred-plus years ago and start it. Yeah. At least if you wanna run it today.
Speaker 1:Yeah. Exactly.
Speaker 2:If you wanna start what could be the next Rolex, but not be alive for when it becomes said asset, Godspeed. So yeah.
Speaker 2:But go to
Speaker 1:Getbezel.com. And can we just give a shout out to the graphics department? We're we're a small team here, but I think it's incredible what we can do. I mean, we're just shy of 75 people now, and they have really been over delivering. And I really love the new tickers.
Speaker 1:Of course, you know we're sponsored by Public. We have the Public stock ticker there, but we also got the Polymarket ticker. We hope you're enjoying staying up to date on what's going on in tech markets.
Speaker 2:Such a beautiful way to understand the world.
Speaker 1:It is.
Speaker 2:And we should actually get some Bezel. We should get, like, a
Speaker 1:watch listings. I just want more tickers every week.
Speaker 2:We need more tickers.
Speaker 1:Yeah. So I think every sponsor gets tickers
Speaker 2:Right across our eyes. Yeah. Eventually, it's just 99% ticker.
Speaker 1:I think so.
Speaker 2:But to close it out, go to getbezel.com. Download the app. The app is fantastic. We are DAUs. And seriously, no kidding.
Speaker 1:And it's, like, the greatest time in history to be a watch fan because there's a major watch announcement. The watch world is exploding with news.
Speaker 2:That's right.
Speaker 1:Rolex launched the Land-Dweller. Yep. Patek launched a complications watch that's worth over a million dollars. AP has just made some updates. A lot of things are launching.
Speaker 1:So I wanted to highlight this Polo 79 in white gold from Piaget. It's about a hundred-thousand-dollar watch, but I think it's a lot of watch for that money.
Speaker 2:It certainly is.
Speaker 1:It's a lot of money, but it's a lot of watch.
Speaker 2:It's a lot of money, but it's a lot of watch.
Speaker 1:And remember, I mean, you can drive a GT3 RS, but you can't take a GT3 RS into a board meeting.
Speaker 2:That's right. So That's right. Many people have said this.
Speaker 1:You can't take a 911 into a board meeting, but you can take your Piaget
Speaker 2:Polo. If you drive your 911 into a board meeting, you will get arrested. You will get arrested if you do
Speaker 1:It. Except, I think more Sand Hill Road VC firms should have massive roll-up doors for board meetings if you wanna drive. You should
Speaker 2:drive your
Speaker 1:car. I think you should be able to drive in. Like, a drive up, and you should just be able to roll down your window and be like, yeah, KPIs were good. Okay. Yeah.
Speaker 1:We'll be raising in Q3. Okay. We'll see you next quarter.
Speaker 2:Well, so we're on the hunt for a new studio.
Speaker 1:Yep. And roll-up doors are
Speaker 2:Roll-up doors aren't, like, a deal breaker, but they're certainly a nice-to-have. Nice to have.
Speaker 1:For sure. It's a nice-to-have. Anyway, we're gonna have Quaid from Bezel, the CEO, on the show tomorrow to break down exactly what's happening in the watch world. I follow all this stuff happening in Europe right now. And I've noticed I have the worst takes, because I'll see a new watch drop and I'll be like, this is amazing.
Speaker 1:I want this immediately. And then he'll be like, that's the most controversial watch that's ever launched. Everyone in the watch world hates it. Like, you are you have, like, the most contrarian position. I'm like, I'm kinda fine with that.
Speaker 1:Like, I don't really mind. Because I'm, like, dipping my toe in. I know the brands. I know the different models, but I'm not, like, you know, in the discourse.
Speaker 2:We're watch enjoyers. We're not watch experts. Exactly. Big difference.
Speaker 1:Yeah. So I would gladly wear a knockoff Royal Oak from Rolex called the Land-Dweller.
Speaker 2:Yeah.
Speaker 1:I think it looks great.
Speaker 2:Anyway Looks great.
Speaker 1:Let's stay with OpenAI and move on to an excerpt from a book that's coming out on OpenAI. There's a segment in the Wall
Speaker 2:Street Journal. It's a long one. Is this different than Ashlee Vance's
Speaker 1:It is.
Speaker 2:OpenAI book?
Speaker 1:It is. And this is an interesting I'll give my take at the meta level. Like, this reporter I don't remember who actually wrote this. It's from a book that's coming out in a few weeks. It's adapted from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future by Keach Hagey.
Speaker 1:It will be published on 05/20/2025, and I'm sure it'll be an interesting read. It is incredibly well sourced. So this writer, this author, was able to get insider accounts of, like, private dinners between Peter Thiel and Sam Altman. Like, you've been to some of these parties. Like, people who go to those parties don't love talking to the press.
Speaker 1:Like, it's pretty rare.
Speaker 2:Well, and the way they're positioning this dinner just from the graphic is that it was actually just a dinner between the two of them.
Speaker 1:It was a birthday party. There were other people there. But still, it's crazy that this leaked at all. Yeah. So it's incredibly well sourced, and there are very interesting facts and quotes that bubble up in this story that we'll kind of go through today.
Speaker 1:But I think the analysis is god-awful, and so, truth zone time. It's definitely truth zone. It's not even truth zone. Like, the facts are correct. I think it's more just, like, the follow-up questions that are obvious to ask if you're in
Speaker 2:in the truth zone.
Speaker 1:Yeah. The takes, you know, need to be put in the truth zone. Anyway, let's read through this, because it's beautifully written. It's like a great writer, a great author.
Speaker 2:I'll kick it off. Yeah. On a balmy mid-November evening in 2023, billionaire venture capitalist Peter Thiel threw a birthday party for his husband at Yess, an avant-garde Japanese restaurant located in a century-old converted bank building in Los Angeles's Arts District, not far from where we are at this moment. Seated next to him was his friend Sam Altman. Thiel had backed Altman's first venture fund more than a decade before and remained a mentor to the younger investor when Altman became the face of the artificial intelligence revolution as the chief executive of OpenAI.
Speaker 2:OpenAI's instantly viral launch of ChatGPT in November 2022 had propelled tech stocks to one of their best years in decades, yet Thiel was worried. Years before he met Altman, Thiel had taken another AI-obsessed prodigy named Eliezer Yudkowsky under his wing, funding his institute, which pushed to make sure that any AI smarter than humans would be friendly to its maker. That March, Yudkowsky had argued in Time magazine that unless the current wave of AI research was halted, literally everyone on earth will die. You don't understand how Eliezer has programmed half the people in your company to believe in that stuff, Thiel warned Altman. You need to take this more seriously.
Speaker 2:You wanna take it over?
Speaker 1:Yeah. I mean, it's just a fascinating history. I mean, Altman's first venture fund was called Hydrazine Capital, I believe. He was actually an investor in my first company. And hydrazine, I haven't asked Sam if this is the reference, but hydrazine is rocket fuel.
Speaker 1:But it's also extremely toxic and will, like, kill you if you breathe it. Interesting. And so I like to think that it's this weird, like, double entendre: venture capital can accelerate your business, but it can also kill you. It's kind of beautiful. Anyway, underreported: Sam is so known for OpenAI now, I think people forget that he ran a very, very successful venture fund and made some incredible investments during that time.
Speaker 1:And then also, what's missing from this is that Peter Thiel is listed on Wikipedia as a cofounder of OpenAI and was one of the initial backers. Obviously, Elon was a major, major backer and has since fought, and there's all this, like, you know, controversy there. So, like, that has become the bigger story. But the initial team behind OpenAI was crazy.
Speaker 1:I mean, there was a YC research project, it was heavily YC influenced, and YC actually has a stake in OpenAI that must be worth a ton now. Yeah. And PG famously, like, found out about that on Twitter, because someone asked him, don't you own a stake? And he was like, yeah.
Speaker 1:I just asked legal, and we actually do. It was very funny. But, yeah, I mean, they go way back. And at one point, Peter Thiel had been an adviser to YC, and Sam Altman was running YC. And Sam brought Peter in, and I think they did a podcast interview together, and they've talked about each other.
Speaker 1:So there's just a lot of, like, history here that's kind of interesting. So moving on. Altman picked at his vegetarian dish and tried not to roll his eyes. This was not the first dinner where Thiel had warned him that the company had been taken over by the EAs, by which he meant people who subscribed to effective altruism. EA had lately pivoted from trying to end global poverty, which was what we saw SBF doing in the EA thing.
Speaker 1:The whole idea was, like, mosquito nets. We gotta get mosquito nets, because mosquito nets are super cheap, and they stop malaria. And so for just, like, a dollar, you can save someone's life, and it's the highest ROI on reducing human suffering. Versus if you're dealing with, like, Alzheimer's treatments for older people, they're expensive, and mosquito nets are really cheap. And so EAs were originally very focused on ending global poverty, then they kind of pivoted to trying to prevent runaway AI from murdering humanity. Thiel had repeatedly predicted that the AI safety people would destroy OpenAI.
Speaker 1:Well,
Speaker 5:it
Speaker 1:was kind of true with, it was kind of true of Elon, but we got rid of Elon. Sam responded at the dinner, referring to the messy 2018 split with his cofounder
Speaker 2:They did not fully get rid of Elon.
Speaker 1:So true. Elon, who once referred to the attempt to create artificial intelligence as summoning the demon. That's a great Elon quote. I love it. And, I mean, it's the whole nuance here, which is, like, everyone is somewhat AGI optimist, somewhat AGI doomer.
Speaker 1:No one has a p(doom) of zero. No one has a p(doom) of a hundred. Everyone's kind of on this gradient, but there's just a question of, like, how do you deal with that risk? And is it with Yep. More innovation and building a more positive future, like the Thielian definite optimism and, hey, okay, we need to build an AI that doesn't kill us, versus Yep.
Speaker 1:The kind of indefinite pessimism of EAs or Yudkowsky, which is like, it's going to happen no matter what. There's nothing we can do.
Speaker 2:Give me money for research. Yes. Give me money for my institute.
Speaker 1:Yes. Nearly 800 OpenAI employees have been riding a rocket ship and were about to have the chance to buy beachfront second homes. Let's hear it for some beachfront second homes, folks. We love beachfront second homes on this show.
Speaker 2:if you're not in the position of those 800 OpenAI employees, you can go to wander.com.
Speaker 1:Yeah. And you can rent a beachfront second home. Fantastic.
Speaker 4:By the day.
Speaker 1:It's pretty Yeah.
Speaker 2:You can just get homes by the day.
Speaker 1:By the day.
Speaker 2:Fantastic innovation. Yes. Luxury homes by the day.
Speaker 1:By the day.
Speaker 2:Hyper fractionalized.
Speaker 1:Fractionalized. I knew you were gonna say it. And so there was a tender offer going on at the time, valuing the company at $86,000,000,000. There was no need to panic. So all the OpenAI employees were very happy.
Speaker 1:At least it seemed that they were. Altman, at 38 years old, was wrapping up the best year of a charmed career, a year in which he became a household name, met with presidents and prime ministers around the world, and most importantly, within the value system of Silicon Valley, delivered a new technology that seemed like it was very possibly going to change everything. And that really is true.
Speaker 1:Yes, the transformer paper was written at Google. Yes, other people were working on language models. But OpenAI was the company that really went full scale-pilled, scaled this thing up, and then also productized it. And also figured out, like, hey, people just want a chatbot.
Speaker 2:They verticalized research through product. Yep. And they were the first to do that in a really, really
Speaker 1:meaningful way. And clearly, whatever was going on at Google was not allowing for that verticalization. There were fantastic researchers and fantastic product people selling ads and making YouTube a great product. They were not talking to each other.
Speaker 2:Yep.
Speaker 1:And so, as the two investing partners celebrated beneath the exposed rafters of LA's hottest new restaurant, four members of OpenAI's six-person board, including two with direct ties to the EA community, were holding secret video meetings, and they were deciding whether they should fire Sam Altman, though not because of EA. This account is based on interviews with dozens of people who lived through one of the wildest business stories of all time: the sudden firing of the CEO of the hottest tech company on the planet, and his reinstatement just days later. At the center was a mercurial leader who kept everyone around him inspired by his technological vision, but also at times confused and unsettled by his web of secrets and misdirections. From the start, OpenAI was set up to be a different kind of tech company, one governed by a nonprofit board with a duty not to shareholders, but to humanity. Altman had shocked lawmakers earlier in the year when he told them under oath that he owned no equity in the company he cofounded, which, of course, was true, but very much mocked, because the idea of somebody doing something not for money is unbelievable to them.
Speaker 2:The senator with the line saying, you need an agent.
Speaker 1:You need an agent. Yeah. That's great. He agreed to the unprecedented arrangement to be on the board, which required a majority of directors to have no financial ties to the company, of course, because it's this nonprofit structure. In June 2023, he told Bloomberg TV, the board can fire me, and that's important.
Speaker 1:Behind the scenes, the board was finding, to its growing frustration, that Altman really called the shots, of course through, like, soft power, not through legal means or voting shares or anything. And it's not exactly a new way of controlling a company.
Speaker 2:It's a really great way to expose that you don't actually understand the company because I'm sure if you went to the OpenAI office and talked to anybody, you would find out almost immediately that Sam Yeah. Did in fact call all the shots.
Speaker 1:Of course. Of course. I agree.
Speaker 2:I know it does.
Speaker 1:This is like what we're talking about with, like, there's intelligence, and Sam is intelligent, but there's other intelligent people around the table. It's really will and agency and vision and coordination and communication and all these, like, EQ things that really push an organization forward. And there are tons of smart people at Google. They couldn't launch a chatbot faster than OpenAI somehow. And it's like, why is that?
Speaker 1:It's probably not because they didn't think of it. It's probably not because of a lack of intelligence. It's a lack of volition. For the past year, the board had been deadlocked over which AI safety expert to add to its ranks. The board interviewed Ajeya Cotra, an AI safety expert at the EA charity Open Philanthropy, but the process stalled, largely due to foot-dragging by Altman and his cofounder Greg Brockman, who was also on the board.
Speaker 1:Altman countered with his own suggestion. There was a bit of a power struggle, said Brian Chesky, the Airbnb CEO, who was one of the prospective board members Altman suggested. There was this basic thing that if Sam said the name, they must be loyal to Sam, so therefore, the board is gonna say no. The dynamics got more contentious after three board members in the pro-Altman camp stepped down in quick succession in early twenty twenty three over various conflicts of interest. That left six people on the nonprofit board that governed the for-profit AI juggernaut.
Speaker 1:And you're getting this weird dynamic where the nonprofit board is set up to just, like, hey, we just wanna do research and advocate for AI safety, but all of a sudden, we've birthed this $300,000,000,000 demon. And the demon is really just a consumer tech company, I guess. Yeah.
Speaker 2:Consumer applications, actually.
Speaker 1:Yeah. Yeah. Just app layers. Your mother was right.
Speaker 1:Altman and his close ally Brockman, their fellow cofounder Ilya Sutskever, and three independent directors. The independent directors were Adam D'Angelo, the CEO of Quora and a former Facebook executive; Helen Toner, the director of strategy for Georgetown's Center for Security and Emerging Technology and a veteran of Open Philanthropy; and Tasha McCauley, a former tech CEO and member of the UK board of the EA charity Effective Ventures. And I believe the wife of a celebrity. Right? Tasha McCauley. She's the wife of, I forget who, someone who is in
Speaker 2:Oh, Joseph Gordon.
Speaker 1:Joseph Gordon-Levitt. That's right. Yeah. Concerns about corporate governance and the board's ability to oversee Altman became much more urgent for several board members after they saw a demo of GPT-4, a more powerful AI that could ace the AP Biology test, in the summer of twenty twenty two. That's good.
Speaker 2:God forbid.
Speaker 1:The AP biology test gets aced. It's over for us.
Speaker 2:It's over.
Speaker 1:It's not
Speaker 2:a neurotic statement. That was humanity's final exam.
Speaker 1:Yes. Yes. Yes.
Speaker 2:The real one.
Speaker 1:Yes. It's over. Once you can ace AP bio. AP bio. Once you understand AP bio, you can do anything.
Speaker 2:You can reverse engineer anything you want.
Speaker 1:Anything you want. I mean, to steelman it, like, you know, you understand bio, you maybe understand bioweapons. Maybe you help someone in the basement create a new smallpox. Like, there are guardrails that need to be on these things, obviously. But it's, you know, a little bit overblown, I think.
Speaker 1:Things like ChatGPT and GPT-4 were meaningful shifts toward the board realizing that the stakes are getting higher here, Toner said. It's not like we are all going to die tomorrow, but the board needs to be functioning well. And, yeah, that's a fair critique. Like, the structure of the company was not aligned with what was happening at the company. Yeah.
Speaker 1:At the same time, it feels like they're acting like everyone was gonna die tomorrow. Yeah. Right?
Speaker 2:Yeah. We actually talked about this
Speaker 1:Yeah.
Speaker 2:Before we got on air. But I think, if they wanted to have a strong case for their actions, there should have been information being fed to them from active members of the research team, and the team broadly, saying, hey, we're very worried about this. Yep. It's not just a cute chat app that's gonna go viral.
Speaker 1:Yep.
Speaker 2:This could sort of snowball into something else. But it seems like they were doing Yep. Their sort of own vibes-based analysis of the situation. And just frustration, having an inflated sense of self-importance, then realizing they weren't a real player in the game.
Speaker 1:Yeah. I completely agree. So, I mean, most of the board was nontechnical, not actually interfacing with the changes at a technical level that were happening in the iterations of the GPT models. Yes, they were scaling up, but how were they doing on evals?
Speaker 1:How were these systems integrated? What systems do these models have access to? All of those things. Completely agree with that. The other thing is, basically, what you're advocating for is, like, a whistleblower program, which kind of exists in every company.
Speaker 1:Like, if you've ever been on a board and some low-level employee emails you, it's probably not good. They're probably complaining about something, and a lot of times they have good points. What was interesting is that there was a story that came out that OpenAI had some really onerous exit process where, if you left and you didn't sign an NDA, they could claw back some of your equity. And everyone was like, this is terrible. And I was like, yeah.
Speaker 1:It seems like maybe overly aggressive, but it's kind of super aligned with the EAs. Because if you actually think that OpenAI's new model is going to kill all of humanity, then your money is worthless. Right? And so you should absolutely say, well, the money is useless. Let me go to the media and say what's happening. And then I'll stop it, and I'll write a book, and I'll make money that way.
Speaker 1:Like like, there's a million ways to make money if you're a successful whistleblower and you actually do uncover, hey. Yeah. I'm the person that saved humanity, and I can prove it because this model, I have proof that it was gonna kill everyone. Yes. I lost my shares in OpenAI, but it doesn't matter because I saved the world.
Speaker 1:And, like, what would be more edifying and then also financially rewarding over the long term than saving humanity?
Speaker 2:Ben would go to the press and say, John and Jordy are considering going to eight hours a day of live streaming, which I think might, you know, massively slow GDP growth, or it could potentially even go negative, because people just stop all work and just listen to the show. Yeah. It's a real asymmetric risk.
Speaker 1:Yes, definitely. Yeah. I mean, we've advocated for podcast safety very significantly and potentially even creating like a government Regulatory capture. Regulatory capture and monopoly.
Speaker 2:Basically, there should be 10,000% tariffs on podcasting.
Speaker 1:There should be like an FDA for podcasting where you have to submit an application with what your show is about, and then you get approved. It takes years, costs hundreds of millions of dollars. Exactly. And then you can't just have some willy nilly podcast just drop an RSS feed.
Speaker 2:Yeah. Just like pharmaceuticals. Exactly. We don't want anybody coming to market with a new pharmaceutical I completely agree.
Speaker 1:This makes so much sense.
Speaker 2:Shouldn't have anyone come into market with the podcast. For sure.
Speaker 1:Anyway, let's move on to equally ridiculous things. Toner and McCauley had already begun to lose trust in Altman. And this is interesting because we haven't actually heard before how the safety process worked within OpenAI. Yep. And they do a good job breaking it down here.
Speaker 1:So, to review new products for risks before they were released. And again, I agree that when you're launching new products, you should assess them for safety. Even Instagram, when they launch a new algorithm, they should be like, is this promoting anorexia? Is this promoting extreme content or not? And to what degree is it making people sad?
Speaker 1:Is it making people happy? Like, let's understand the impact of the technology that we're building. That's great. And here's how they wound up doing it. So OpenAI had a joint safety committee board with Microsoft, a key backer of OpenAI that had special access to use its technology and its products.
Speaker 1:So Microsoft can vend GPT models into Azure and then deliver them into different segments of the Microsoft ecosystem. During one meeting in the winter of twenty twenty two, as the board weighed how to release three somewhat controversial enhancements to GPT-4, Altman claimed all three had been approved by the joint safety board. Toner asked for proof and found that only one had actually been approved. And so this is like the smoking gun. Like, Sam Altman, like, lied about the approvals.
Speaker 1:And it's like, okay, author, follow up. Like, what are the controversial enhancements? Like, I remember what the differences between GPT-3.5 and GPT-4 were. The context window went from 4k tokens to 32k tokens.
Speaker 1:They trained on 10 times as many tokens. Like, we saw the small dot and then the big circle, and everyone was scared. I get that everyone was scared, but, like, what is the problem? Like, GPT-4. Yeah.
Speaker 1:Okay. Yeah. It can do PDF upload. Is that gonna kill everyone? Like, what is the problem that you're actually upset about?
Speaker 1:I understand that, like, yes, he might have not followed the right rules in this case, but, like, habeas corpus. Like, we have to produce some sort of corpse here. And, like, what is this controversial enhancement to GPT-4? We've all been playing with GPT-4 for years now. It's no big deal.
Speaker 1:It's been copied, open sourced. DeepSeek, you can run it on your phone. You can download it on GitHub, Mistral, and, like, all these GPT-4-level models. Like, what are we worried about? I don't understand. Like, it's so confusing and infuriating that the author didn't dive in deeper and say, like, what were the controversial enhancements?
Speaker 1:Because to me, it was a controversial lack of enhancements. It was like, GPT-4 was great, but it wasn't good enough. It should have had more. Immediately, I was like
Speaker 2:Go bigger.
Speaker 1:32k context window, not big enough. What did Google do? They launched a 100k context window. They launched a million-token context window. Is that controversial?
Speaker 1:I don't know, but it's super useful if you're trying to deep-dive a book. Like, I don't get it. I'm so confused by this. Anyway, and then this is the really funny thing, because obviously all the hate's been on Sam. But Microsoft is just, like, doing crazy stuff this whole time.
Speaker 1:So around the same time, Microsoft launched a test of the still-unreleased GPT-4 in India. And they're just like, yeah, let's just try it in India. Like, who cares? Like, we don't need to ask anyone.
Speaker 1:Like, let's just do it. This is the first instance of the revolutionary code being released in the wild.
Speaker 2:Not the first time
Speaker 1:They're just like
Speaker 2:Bill Gates.
Speaker 1:Let's test it on India.
Speaker 2:Bill Gates organizations have
Speaker 1:Done things like that. Yeah. It's like, what if it was Microsoft? Like, you're afraid of AI safety. Like, what if it had gone wrong and, like, completely screwed up India?
Speaker 1:Like, that would be bad. Right?
Speaker 2:They're, like, also just Internet products. So
Speaker 1:Yeah. I mean, I think it's fine. I think it's great to go test these things wherever. I don't think it's a big deal. But it's just very funny that, like, Microsoft is just like, oh, yeah.
Speaker 1:Like, let's not deal with that stupid joint safety board. Like, let's just rip. Yeah.
Speaker 2:Like bulls in a china shop.
Speaker 1:My boys back in
Speaker 2:Redmond are like, we need GPT support. Chat summary in Teams. Like, we're gonna take the risk. We're gonna risk humanity for chat summaries.
Speaker 1:It's so funny that that's what we're talking about. Like, we're talking about, like, Copilot in Outlook and stuff. Like, it's so low stakes in some ways. It's like better autocomplete in many ways. Anyway, so Microsoft launches GPT-4 in India.
Speaker 1:They don't get approval from the joint safety board, and no one had bothered to inform OpenAI's board that the safety approval had been skipped. The independent board members found out when one of them was stopped by an OpenAI employee in the hallway on the way out of a six-hour board meeting. Never once in that meeting had Altman or Brockman mentioned the breach, probably because it's not that important that Microsoft is testing GPT-4 in India.
Speaker 2:It's just a very funny dynamic to be in this sort of hyperscale startup mode where you're scaling headcount rapidly. You're raising all this money. You're scaling users sort of dramatically, right? Yeah. And then you have this board that's sitting over your shoulder kind of like nitpicking every small decision that you make.
Speaker 1:And for a completely new class of risk vectors. Like, if I was running a company and somebody told me, like, hey, a partner of ours just launched our product, kinda, like, white-labeled it, and they launched it in India, I would be like, okay. Well, like, what does that mean for my brand?
Speaker 1:Like, maybe I want my brand to be front and center in that market, and so I don't love that because they're kind of, like, stealing my brand building that will eventually happen when I get the chance to roll out in that country. So that's an issue. Are we paying taxes appropriately? Are we legal there? Do we have a do we have a registration properly?
Speaker 1:Or is this gonna be a PR nightmare? Or do we have another partner that was expecting to work with us in India, and now they're
Speaker 2:how are they ensuring that they're putting enough spend behind the go to market?
Speaker 1:Yeah. There's, like, a million
Speaker 2:They're successful.
Speaker 1:Questions. And then the second one is, like, is ASI going to explode in India and, like, kill everyone? And it's like, that's not at the top of my stack.
Speaker 1:But so, anyway, then one night in the summer of twenty twenty three, an OpenAI board member overheard a person at a dinner party discussing OpenAI's Startup Fund. This is the venture capital fund that they use to invest in startups that could potentially use OpenAI's technology. Basically, giving money to the companies that they're gonna steamroll with the next generation of ChatGPT, it seems like. But I guess there are some companies that probably took money from the Startup Fund and, like, wound up doing something application layer that was specific enough and are doing great.
Speaker 2:Took money from OpenAI.
Speaker 1:Yeah. Yeah. It's like, that makes sense, because it is much more, like, narrow than just being, oh, we're, like, you know, slight prompt engineering on top of ChatGPT, and we're gonna get rolled.
Speaker 2:Sam's a dog. I don't think he was wanting to use the Startup Fund to fund potential competitors. It was very oriented around: you're going to be using ChatGPT, we'll give you some money.
Speaker 1:Yeah. Truly, you're going to be an API customer forever, and you're never going to train a foundation model. So the fund was announced in 2021 to invest in AI-related startups, and OpenAI had announced it would be managed by OpenAI. But the board member was overhearing complaints that the profits from the fund weren't going to OpenAI investors. This was news to the board, so they asked Altman. Over months, directors learned that Sam Altman owned the fund personally.
Speaker 1:OpenAI executives first said it had been for tax reasons, then eventually explained that Altman had set up the fund because it was faster and only a temporary arrangement. And knowing everything about OpenAI's insane legal structure, I believe this. Like, it's hard to set up a fund as, like, a Delaware C corp or, like, an LLC. Like, it's tricky. Like, there's not exactly, like, Carta or AngelList for setting up a fund when you're a for-profit that's owned by a nonprofit that's operating all these different things and has this ownership by Microsoft and all this complicated cap table.
Speaker 1:Yeah. Like, I'm sure it was very onerous to set up something like this. Funny scenario too
Speaker 2:because Yeah. The Sam Altman, even if he was fully embracing the sort of EA mode, should be in the mindset of like, I need to ignore
Speaker 1:Yeah.
Speaker 2:The board, like, some aspects of the board's, you know, desires, because if other competitors, like DeepSeek, beat us Totally. You know, that wasn't really on the radar at the time. I'm sure it was if you were building a foundation model. Yeah. If other people beat us to this, none of it even matters.
Speaker 2:Like Yeah. All this stuff doesn't matter. So we need to, like, take some shortcuts.
Speaker 1:Yeah. Yeah. It's like, what did Ilya see? What did Sam see? Sam saw commoditization of the foundation model layer and a knock-down, drag-out fight in the application layer, and was like, I gotta go fast, and this is a consumer tech company.
Speaker 1:But here is a line that I think we need to take much, much more seriously. So the truth came out about the structure of the Startup Fund. OpenAI said Sam Altman earned no fees or profits from the fund, an unusual arrangement. So he had no carry and no fee from that fund, and that is something that I think is unacceptable. Like, any fund manager should be getting two and twenty at least.
Speaker 1:That's right. The great funds, you know, three
Speaker 2:Three and twenty.
Speaker 1:Four percent.
Speaker 4:We've seen
Speaker 1:three and thirty before. And so this is a big problem. If I was investing in a fund and someone was like, oh, like, no fee, no profit, I'd be like, yeah. Straight to jail. Yeah.
Speaker 1:Yep. But, of course, that actually aligns with OpenAI's incentives. We're joking. But it is very odd. And, I mean, it does kind of align with this idea that he's moving fast.
Speaker 1:Yeah.
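For reference, a minimal sketch of the "two and twenty" economics being joked about here, in Python, with hypothetical numbers; the fund size and profit figures below are assumptions for illustration, not the OpenAI Startup Fund's actual terms.

    # Hedged illustration of standard venture fund fees; all numbers hypothetical.
    fund_size = 175e6      # assumed committed capital
    mgmt_fee_rate = 0.02   # the "two": 2% annual management fee
    carry_rate = 0.20      # the "twenty": 20% of profits (carried interest)

    profits = 500e6        # assumed gains on exit
    print(f"Annual management fee: ${fund_size * mgmt_fee_rate:,.0f}")  # $3,500,000
    print(f"Carried interest: ${profits * carry_rate:,.0f}")            # $100,000,000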
Speaker 2:OpenAI said Altman earned no fees or profit from the fund. Not that he couldn't have had two. I think your interpretation is right. There's also another interpretation where
Speaker 1:He just hadn't hit the hurdle yet.
Speaker 2:He could have had two and twenty, but then, you know, whatever.
Speaker 1:I think
Speaker 2:But I think you're right.
Speaker 1:Yeah. If I had to guess on what the actual structure was, it was like, technically, he owned the GP, and OpenAI was an LP in the fund. So he had legal control over it, but all the money and profits would flow through without anything passing to
Speaker 2:him.
Speaker 1:Yeah. And that seems like the fastest and most
Speaker 2:Clearly, Sam Altman, ridiculously talented venture investor Yeah. Like, should be the one at OpenAI making the final decision on what companies are getting invested in or not. Yeah. And that's in the best interest of Yeah. OpenAI.
Speaker 1:And also, it's like, in what scenario the idea of him, like, he could just start a separate venture fund and go raise a billion dollars and just be like, hey, guys, I have two jobs now. And everyone would just be like, yeah, that's fine. So if he wanted to make two and twenty on a lot of money, he could go do that.
Speaker 1:And so it doesn't quite track. Like, what frame of mind would he have to be in that it's like, okay, I'm, like, the CEO and leader of the next-generation consumer tech behemoth, but, really, I wanna run, like, a micro fund and, you know, have control over it? Like, the economics just don't really make sense there. And this is what I'm talking about with a follow-up. It's like, you should follow up and be like, what was actually at stake if he had taken the fees and the profits from the fund?
Speaker 1:Like, $10,000,000? A hundred million bucks? Like, certainly not 10,000,000,000.
Speaker 2:That would have been sort of a smoking gun, if he was out in the world saying, I don't have any equity in OpenAI. Yeah. And then was, like, running the side fund and managing it.
Speaker 1:Yeah.
Speaker 2:And it was branded. But but the fact that he's not earning fees or profits from the fund just goes to show like, okay. Yeah. That should be like, the board should be able to be like, small slap on the wrist, like next time go through the proper process. Totally.
Speaker 2:Don't make off-balance-sheet investments from OpenAI in the meantime. Yep. Just do it the right way.
Speaker 1:Yeah.
Speaker 2:And he would still say, like, you know, yeah, probably should have just done it, but, you know, whatever. We are moving quickly.
Speaker 1:It would be so funny if it was like, oh, yeah, he set up this fund, and, like, he personally owns, like, 30% of Anthropic now. Like, he's just like, I can't deal with these guys at OpenAI. I'm just gonna go turbo long the direct competitor.
Speaker 1:No. It makes no sense. And there really is no smoking gun in the portfolio that I've seen, at least. But, again, like, follow up. To the independent board members, the administrative oversight defied belief.
Speaker 1:I can't believe it. It cast previous oversights as a possible pattern of deliberate deception. For instance, they also hadn't been alerted the previous fall when OpenAI released ChatGPT, at the time considered a research preview that used existing technology, but which ended up taking the world by storm. In late September, Sutskever emailed Toner asking if she had time to talk the next day.
Speaker 1:This was highly unusual. They didn't talk outside of board meetings. On the phone, Sutskever hesitated before coughing up a clue: you should talk to Mira more. And so, yeah, the launch of ChatGPT is funny, because I talked about this with Dwarkesh, saying that, like, there's two criticisms of Sam.
Speaker 1:One is like, he's nontechnical. He's never done anything. He's not, like, creating really new technology. And then the other one is like, he moved too fast. He launched ChatGPT without, you know, telling the board.
Speaker 1:And it's like, only one of those can be true. Right? Like, you can take one or the other, but I've seen people try and take both. And I'm like, that just doesn't match. Yeah.
Speaker 1:But, yeah, I mean, I don't know. He probably should have told the board they were launching ChatGPT. I understand that it was this research preview. They did launch a lot of different things, the OpenAI sandbox, they launched Playground, and none of the other things had taken off like ChatGPT. Maybe it was just lucky.
Speaker 2:And the use cases for that early research product were pretty tame. It was like generate the, you know, subject line for your ecommerce brand. Yeah. That's what people were building on top of it. There was nothing agentic.
Speaker 1:Yeah. Yeah. Yeah. It was
Speaker 2:just like a text generation tool. Yeah. That was good, but not even great. It was magical. Yeah. But it wasn't
Speaker 1:anything like where it is now. If you just think about, like, the frame of mind that they're in, they're like, we are training this, like, AGI god, and we're spending a hundred million dollars on a training run. Like, if it comes out wrong, it could be, like, really misaligned, and it's, like, really high stakes. And so, like, I imagine that that's where the board's focused. They're not focused on, like, oh yeah, you know how we're vending the GPT-3.5 text-davinci-002 model into a bunch of companies? We created a wrapper on that that just lets people chat with it directly.
Speaker 1:And it's just, like, a new UI on top of the LLM that we already trained. Like, when you frame it like that, it's like, oh yeah, they did just kind of whip up the ChatGPT interface in, like, a couple days, I imagine. I mean, maybe it was a couple weeks. Like, the first version of ChatGPT was really just, like, a wrapper around their LLM. And it didn't feel like that would be the thing that broke it through. It felt like the thing that would break it through is scale it up another 10x.
Speaker 1:Spend a billion dollars. Do Stargate.
Speaker 2:You know?
Speaker 1:Right? Yeah. Anyway, let's go into Mira. So Mira Murati had been promoted to chief technology officer of OpenAI in May of twenty twenty two and had effectively been running the day to day ever since. Absolute dog, operational mastermind.
Speaker 1:You love to see it. When Toner called her, Murati — corporate athlete, for sure — described how what she saw as Altman's toxic management style had been causing problems for years, and how the dynamic between Altman and Brockman, who reported to her but would go to Altman anytime she tried to rein him in, made it almost impossible for her to do her job. And so that is a very odd dynamic. I believe Brockman was a cofounder.
Speaker 1:And, obviously — yeah, Brockman. Altman and Brockman go way back, because Altman was an investor in Stripe and Greg was the CTO. And so they go way, way back.
Speaker 1:They cofounded the company together, and now there's, like, this layer in between them that's Mira Murati, who's clearly, like, an incredibly talented corporate athlete, but it is a little weird. Like, imagine if it was like, oh, yeah, I'm actually hiring someone; you'd report to them, and they report to me. Like, you would still come to me and be like, yo. Like, what?
Speaker 1:Like, we go way back. It's just weird. And so however they got in that situation, it just seems bad. Right? Yeah.
Speaker 1:Murati had raised some of these issues directly with Altman months earlier, and Altman had responded by bringing the head of HR to their one-on-one meetings for weeks until she finally told him she didn't intend to share her feedback with the board. Brutal. That sounds not fun in general. Toner went back to Sutskever, the company's oracular chief scientist. Love that word.
Speaker 1:He made it clear that he had lost trust in Altman for numerous reasons, including his tendency to pit senior employees against each other. In 2021, Sutskever had mapped out and launched a team to pursue the next research direction for OpenAI. That's probably the reinforcement learning, the o1, the Q* technology that they were working on at the time that was very controversial, but then, you know, DeepSeek open sourced it. And so it's like, was it really that big of a deal? We don't know.
Speaker 1:Sutskever had mapped this out and launched a team to pursue the research, but months later, another OpenAI researcher, Jakub Pachocki, began pursuing something very similar. And so the teams merged, and Pachocki took over after Sutskever turned his focus to AI safety. And so Altman later elevated Pachocki to research director and privately promised both of them they could lead the research direction of the company, which led to months of lost productivity, because you have two groups. It's like the Google strategy, basically. Yep.
Speaker 1:And so, you know, just, like, a lot of weird dynamics of, like, who's really in charge. The board is misaligned with the management. The management team is misaligned with, like, the cofounders, who are clearly, like, brilliant but maybe not operational. And so you throw some operational talent in between them, and then all of a sudden, you have a cofounder who's super technical that's reporting to someone who's, like, an executive, and it just gets very,
Speaker 2:very mushy. Two challenges. One, you have a nonprofit research group that's transitioning into a for profit hyperscaler. Yep. And then you also have this business that's verticalizing, that's doing everything from research to the end product layer.
Speaker 2:Yep. They're also doing an API, and they're also still worried about AI safety. Yep. And then you combine that with products that are generating, you know, millions and then tens of millions and then hundreds of millions of revenue while also losing so much money that, like, if Sam can't, you know, continue, he's on this sort of fundraising treadmill. So it's like, well, we need to keep growth at, you know, this sort of ridiculous rate.
Speaker 2:We, you know — and this is, like, even before all the nonprofit-to-for-profit conversion stuff. So, again, it's a lot to juggle. You
Speaker 1:Can do a Rubik's cube.
Speaker 2:You can juggle on a corporate level. But can you do a Rubik's cube?
Speaker 1:I can do a Rubik's cube, but I can't juggle, unfortunately. It's on my twenty twenty five goals. I gotta learn. Yep. Anyway, this is another wild, wild story. Treading carefully in terror of being found out by Altman, Murati and Sutskever spoke to each of the independent board members over the next few weeks. It was only because they were in daily touch that the independent directors caught Altman in a particularly egregious lie.
Speaker 1:Toner had published a paper in October that repeated criticism of OpenAI's approach to safety. Altman was livid. He told Sutskever that McCauley had said Toner should obviously leave the board over the article. McCauley was taken aback when she heard the account from Sutskever. She knew she had said no such thing.
Speaker 1:And so how would you feel if I published a paper saying TBPN is the worst thing to ever happen to media and podcasting generally?
Speaker 2:I was just catching up here.
Speaker 1:Like, it just seems like an offensive thing to do. Like, I don't know. It's just weird. It's weird to be on the board of this company and then publish a paper that's like Oh, you were
Speaker 2:referencing that. I was looking at the dynamic of Sutskever and McCauley.
Speaker 1:Oh, that whole thing. Yeah.
Speaker 2:Yeah. Yeah.
Speaker 1:Yeah. I mean, this alleges that Sam was basically, like, playing mom and dad. Like, telling dad, mom said I can watch TV — what did your mom actually say? It's like that dynamic.
Speaker 1:Right? But I just think the whole core of it is so weird. You're on the board. Like, if you're not comfortable with the approach to safety,
Speaker 2:like Sounds like that should have been an internal memo.
Speaker 1:Should have been an email.
Speaker 2:And you decided to risk what was already, I'm sure, a tense dynamic for anybody on the board by putting out basically, like, activist materials Yeah. When you're supposed to be working internally Yeah. With management. It seems like a very extreme step to just sort of, you know, put this thing out Yes. Randomly.
Speaker 1:And I mean, this is what I keep coming back to is like, let's use our example. Like if you published a paper saying like TBPN's going downhill, like it's a disaster.
Speaker 2:John's not taking podcast safety seriously enough.
Speaker 1:Like like, first, I would be, like, offended, like, hey, why didn't you just talk to me, like, like, off mic or, like, even on the show? Like Yeah. You could talk to me whenever.
Speaker 2:We're live for three hours a day.
Speaker 1:You you you could email me. There are a million ways that you could bring that
Speaker 2:up. But
Speaker 3:I could use the horn.
Speaker 1:Yes. You could use the horn to let me know that you think I'm not doing a good job. But the thing that that would really irk me is if you did that and you were wrong. Like like, okay. So you're Toner, you're criticizing OpenAI for the approach to safety?
Speaker 1:Habeas Corpus. Show me the damage that OpenAI's approach to safety has done. Like, what are we talking about here? What is the damage? Is it
Speaker 2:part of it
Speaker 1:And there's a million ways — you don't even have to do the killing-everyone scenario. You could say, hey, like, the election is going to swing because of OpenAI's products. Like, did that happen? No.
Speaker 1:Like Yeah. No. Not at all.
Speaker 2:I think the I think it comes down to if you're the safety guy or girl Yeah. And you care a lot about your job, it's part of your identity, it's part of your entire brand is AI safety Yeah. All this EA stuff.
Speaker 6:And
Speaker 2:then your job's actually really not that important at the moment.
Speaker 1:Yep.
Speaker 2:And you feel like you're not getting enough attention. Yep. Like this just come this just screams like, I want attention. Like, give me attention, validate my beliefs, make me feel important. Yeah.
Speaker 2:Which is like a board member who has an inflated sense of self importance and wants to be a star. Yeah. It's like a pretty toxic dynamic.
Speaker 1:Yeah. Also, it's just like there's no there's no relative assessment here. Like, okay. You have a problem with OpenAI's approach to safety. Let's compare it to DeepSeeks.
Speaker 1:Let's compare it to Google's. Let's compare it to Grok's. Let's compare it to Mistral's. Like, rank it for us, Helen. Like, tell us, like, where are we sitting here?
Speaker 1:Because, like, A, I'm not seeing any damage from OpenAI products. What I'm seeing is, like, a minor productivity boost for knowledge workers. Like, people, like, can find answers to things a little bit faster, write some copy a little bit faster.
Speaker 2:Yeah. Basically, it's a great, you know, copilot Yeah.
Speaker 3:For your work.
Speaker 1:I know something. Like, wait.
Speaker 2:So it's what I've been using it for?
Speaker 1:Even the job displacement thing. Right? Like, if if we were seeing, like, oh, wow. Like, unemployment is really ticking up. Like, this is having an impact.
Speaker 1:I'd be like, okay. Yeah. Like like, we do need to talk about this. We need to really think about Get ahead of it. Yeah.
Speaker 1:Yeah. And maybe it's, you know, some sort of regulation. I don't wanna, like, lock this down too much, but maybe there's some sort of, like, you know, incentive or tax benefit for human workers. Like, we're doing something to take care of Americans. Like, I'm all here to debate that and discuss that.
Speaker 1:There's probably a good way. Sam's talked about UBI before. Like, there are ways to address, like, serious job losses, but we have, like, 3% unemployment. Like, we're not in some, like, massive AI disruption to jobs yet. Like, there's no damage being done, and so, like, what are you talking about?
Speaker 1:Anyway.
Speaker 2:Yeah, it's almost like that board level role around AI safety and thinking about the impact of AI is like if Toner was writing research papers advising governments globally saying like, here's how we see this technology panning out. Totally. Here's how you need to be planning around Totally. Employment and what the future of sort of various
Speaker 1:career professionals That'd be very
Speaker 2:valuable and beneficial Totally. AI.
Speaker 1:But that person should report to the CEO and be like, hey, I have a bunch of ideas about how we should be positioning this. Yeah. How should we be messaging this to governments? Like, I'm almost in, like, the PR department, and I'm publishing all this.
Speaker 1:Not higher up, like, taking shots at the company you're in charge of. Anyway, I totally get the criticism that, like, getting mommy-daddied is annoying. I totally get that. I think that that is a legitimate criticism here. And it clearly also pissed off Ilya and Mira, because they'd been collecting evidence, and now Sutskever was willing to share.
Speaker 1:He emailed Toner, McCauley, and D'Angelo two lengthy PDF documents using Gmail's self-destructing email function, which I didn't know was a thing. Google, what are you doing? Like, you haven't surfaced that to me. Did you know that there's self-destructing emails in Gmail? Never heard of
Speaker 2:that. Fun.
Speaker 1:Give it a try, folks, when you're whistleblowing. One was about Altman. The other about Brockman. The Altman document consisted of dozens of examples of his alleged lies and other toxic behavior, largely backed up by screenshots from Murati's Slack channel. In one of them, Altman told Murati that the company's legal department had said that GPT-4 Turbo didn't need to go through the joint safety board review.
Speaker 1:Again, it's like, what is GPT-4 Turbo? It's just, like, the same model, it runs faster?
Speaker 2:Do Turbo.
Speaker 1:Need a joint safety board review? It seems like he kind of has an argument there, but then Murati checked with the company's top lawyer, and the lawyer said that he had not said that.
Speaker 2:Again, I keep going back to this, but being in this sort of existential fight for your right to be a company, which is you're losing money, you're growing quickly, but you need to be raising more money and you have a lot of heavily funded competitors.
Speaker 1:Yep.
Speaker 2:And then you have the joint safety board review who says we need another review before you can release a slightly updated version of of a product that's already been out in market being utilized.
Speaker 1:I mean, I can't imagine — Frustrating. — anything more cumbersome than having to do, like, a review for every
Speaker 2:Heavy. And honestly, you kind of understand what was happening. And
Speaker 1:GitHub commit.
Speaker 2:Again, like, the critiques of Sam not being consistently candid Yeah. Which was what they eventually came out
Speaker 1:Yeah.
Speaker 2:And said. It's like, yeah, that's fair. But at the same time, the fact that he has been able to lead the company to hundreds of millions of users Yeah. Staying at the edge and winning at the sort of app layer right now despite all of this Yeah. Is just a testament to his executive
Speaker 1:Yeah. I agree. Yeah. Yeah. It's odd.
Speaker 1:So anyway, on the afternoon of Thursday, 11/16/2023, Sutskever and the three independent board members logged in to a video call and voted to fire Altman. Knowing Murati was, like, unlikely to agree to be interim CEO if she had to report to Brockman, they also voted to remove Brockman from the board. After the vote, the independent board members told Sutskever that they had been worried that he'd been sent as a spy to test their loyalty. That night, Murati was at a conference when the four board members called her to say they were firing Altman the next day and asked her to step in as CEO. She agreed.
Speaker 1:When she asked why they were firing him, they wouldn't tell her. And this was kind of odd, because, like, she had been in earlier conversations. Just a couple paragraphs earlier, it says Sutskever and Murati were collecting evidence. And there's a whole bunch of these examples where you'd think that Sutskever was communicating with her, so she should know. But then Murati's like, I don't know why you would fire him, which is kind of weird, because Yeah. Earlier in this article, they're saying, like, she's really annoyed by all the things Sam's doing.
Speaker 1:So shouldn't she just immediately know that, oh, yeah, they're firing him for all the stuff we talked about? It's, like, very odd. Like, why would she even ask? And then she really goes on the offensive and says, like, have you communicated this to Satya Nadella, knowing how essential the Microsoft CEO's commitment to their partnership was to the company? They had not.
Speaker 2:I remember when this was happening. It was clear, you know, it was so haphazard and so shocking Yep. So sudden, and the team didn't know what was going on. And there's all these sort of layers. It's like, you know, a family infighting. Right? Like, they're all on the same team, but clearly there's, like, a power struggle and dynamics.
Speaker 2:And then, you know, like, Satya is kind of like the grandpa. He's like, really, guys? Like, you know, like, you didn't consult me on this? Yeah. Like, that's insane.
Speaker 1:And, like, from his perspective, like, you guys are fighting over autocomplete and Clippy? Like, the new Clippy.
Speaker 2:The new Clippy.
Speaker 1:You guys are building the new Clippy.
Speaker 2:He's like, I just wanted the new Clippy faster.
Speaker 1:And then
Speaker 2:We already tested it in market, and it's a
Speaker 1:hit. And it's
Speaker 2:a Let's roll it out
Speaker 1:to the
Speaker 2:rest of the world.
Speaker 1:Yeah. And so all the board said was that he had not been consistently candid with the board. Friday night, OpenAI's board and executive team held a series of increasingly contentious meetings. Murati had grown concerned that the board was putting OpenAI at risk by not better preparing for the repercussions of Altman's firing. At one point, she and the rest of the executive team gave the board a thirty-minute deadline to explain why they fired Altman or resign.
Speaker 1:And, again, this is odd, because it feels like she's kind of on the executive team saying, like, we're all wondering why Sam got fired, but, like, she knows, or she should know. And so it's like, why didn't she just turn around and say, hey, everyone, I actually know why he got fired. Like, I was in these conversations. It's very odd.
Speaker 1:Yeah. The board felt they couldn't divulge that it had been Murati who had given them some of the most detailed evidence. They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board. Isn't that a crazy thing to do?
Speaker 1:That's so wild. So, like, the board is like, well, this is our next CEO, Mira. She's asking us to explain why we fired Sam, but she's the reason that we fired Sam. And we can't say that, because if we do, we lose her as the CEO. It's like this crazy power struggle.
Speaker 1:It's, like, so wild. It's like Game of Thrones over there. A narrative began to spread among Altman's allies that the whole thing was a coup by Sutskever, driven by his anger over Pachocki's promotion and boosted by Toner's anger that Sam Altman had tried to push her off the board. This was like the coded "i love you all" tweet from Sam, where he signed his tweet "i love you all," which is i-l-y-a: Ilya. And so it was like, oh, he's, like, pointing the finger at Ilya.
Speaker 1:And so then this whole, like, meme spread that Ilya was responsible, when really it seems like a lot of people were involved. You've heard the whole story now. Crazy. Sutskever was astounded.
Speaker 1:He had expected the employees of OpenAI to cheer. Bad. Bad.
Speaker 3:They love speculation.
Speaker 1:People love consumer tech companies. Like, they're like, we're making Clippy here, guys. Clippy but good. I don't care about this safety mumbo jumbo. Clippy but good.
Speaker 2:Cracked Clippy.
Speaker 1:Cracked Clippy. Like, what are you talking about, Ilya? I just wanna get a second beach home and vend Clippy into every Microsoft product.
Speaker 2:It's crazy that cracked clippy.com is available.
Speaker 1:Crackedclippy.com. Go get it.
Speaker 2:Go get it.
Speaker 1:Start the GPT wrapper. Yeah. Get sued by Microsoft. Yeah. I'm sure they still own him.
Speaker 2:They will come after you hard. So by Monday morning, almost all of them had signed a letter threatening to quit if Altman wasn't reinstated. Among the signatures were Murati's and Sutskever's.
Speaker 1:Such a twist. Actually, I really like the Oh, yeah. I mean,
Speaker 2:this
Speaker 1:We
Speaker 2:got we're on
Speaker 1:TV. It's intelligence versus EQ.
Speaker 2:Right? Yeah.
Speaker 1:The intelligence thing says that, well, legally, we can fire Sam. But the emotionally intelligent thing is, like, actually, a lot of people like Sam, and OpenAI is nothing without its people. Right? And to be clear, I don't think Sam was bluffing. He was like, I'll just go to Microsoft Research and hire everyone back that likes me. And a lot of people like him.
Speaker 2:Yeah. And no one else for sure was getting 40 on 300 done.
Speaker 1:For sure.
Speaker 2:Outside of Sam.
Speaker 1:For sure. So let's hit the size gong one more time for Sam and the OpenAI team. What a funny story. Anyways. It's one of the greatest tech stories of all time.
Speaker 1:I mean, I know it was very stressful and frustrating for a lot of people involved. But, you know, we I don't know.
Speaker 2:Just The story's not over.
Speaker 1:The story's not over. It's still a knock-down, drag-out fight. Anyone could win. Is OpenAI Yahoo or
Speaker 2:Google? Let's figure it out. Jessica Livingston Yeah. Cofounder of YC, has a podcast, and she had Sam on to talk about basically that weekend
Speaker 1:Oh, cool.
Speaker 2:And the craziness. So it's worth a listen. But Sam — just, like, him walking through the sort of, like, step-by-step conclusions. And he's a deal guy. So, like, at certain points that weekend, he was basically like, yeah, I can just go to Microsoft and keep doing what I'm doing. Yeah.
Speaker 1:I mean, I know that if I was in his position and I was super stressed during this weekend, I would wanna just get out of the city. I would wanna go to the Wander Sonoma Vineyards. I would go to wander.com, and I would try and find my happy place, I would say. I would actually be singing to myself. I'd be singing.
Speaker 2:Find happy place. Your happy place.
Speaker 1:Book a wander with inspiring views. Hotel grade amenities, dreamy beds, top tier cleaning, and twenty four seven concierge services.
Speaker 3:Wander here
Speaker 1:is such a
Speaker 2:fantastic idea. It's basically: take some of the greatest homes in the world and make them available by the day.
Speaker 1:You love this too much. We can do two days, three
Speaker 2:days, four days.
Speaker 1:Thirty days. The Wander Sonoma Vineyards is fantastic. Four bedrooms, three and a half baths, five beds, eight guests, 3,400 square feet. There's a pool.
Speaker 2:I lived in Sonoma County. So if you go stay at that Wander, DM me. I'll give you some recommendations.
Speaker 1:Yeah. You almost got sucked into that whole East Bay Rationalist thing, the EA thing.
Speaker 2:I was born You're born in dark on on the dark side
Speaker 1:of it. You merely adopted it. I was born in it. You were born at Oakland Children's Hospital. We'll get Eliezer on here.
Speaker 1:You can debate him.
Speaker 2:Yeah.
Speaker 1:You can be like, but what if I turn the server off? What if I pull the plug, Eliezer? What happens then?
Speaker 2:What if I have what if what if there's bad robots but I have a good army of robots powered by Brett Adcock's Figure AI? What then?
Speaker 1:What then, Eliezer?
Speaker 3:What are
Speaker 2:you gonna do about that? Owned. They don't even walk like Biden anymore.
Speaker 1:Yeah. Anyway, let's move on to Intel. Everyone's wondering what's gonna happen with Lip-Bu Tan, the new CEO of Intel. Tons of debate over what should happen to Intel. Of course, Intel, the storied chip manufacturer, is integrated, so they both design and manufacture the chips.
Speaker 1:They have a design arm and a fab. Very different from NVIDIA and TSMC. And it's crazy — one only designs, the other only manufactures.
Speaker 2:If they had one different letter in their product, they would have been a trillion-dollar company. If there was just a G instead of a C. A GPU instead of a CPU. True.
Speaker 1:True. They really were just one letter off. You know, they actually do make a GPU.
Speaker 2:Really? Yeah. You don't hear about it much.
Speaker 1:You don't hear about it much. It's not very popular. Anyway, so Lip-Bu Tan isn't signaling a major departure from Intel's past strategy. When he came on, he said, hey, we're staying on the path.
Speaker 1:And and we were kind of debating, like, is this just signaling to employees, like, hey. Don't worry. Everything's gonna be the same. New CEO. You just keep doing what you're doing.
Speaker 1:We're gonna figure out a new strategy, but, like, your job is safe right now. Or was he a genuine believer in the strategy that's been going on for years? He has long agitated for change — hey, we need some change — and we think that's why he was brought in. But The Wall Street Journal has some new reporting that we're gonna take you through.
Speaker 1:And so he's only been on the job for two weeks, but time isn't really on Lip-Bu Tan's side. Tan became Intel's new CEO on March 18 and has already started laying out some of his vision for the storied but troubled chip giant. In a letter to shareholders, he spoke of the need to up our game — we love that — to make Intel's products more competitive in the crucial market for artificial intelligence systems. He also said he was equally focused on building up the foundry business, where Intel manufactures chips designed by other companies. Of course, Intel has lost out to TSMC on the TPU, on the Apple silicon chips, on the NVIDIA stuff.
Speaker 1:Like, TSMC really has a lot of great clients, and Intel can do a lot of what's called trailing-edge fabs. So think about the chip that goes in your car.
Speaker 2:Yeah.
Speaker 1:It doesn't need to be an H100, but it does need to be made with quality at scale. And so we saw a lot of this during COVID. A lot of the chip delays led to car delays.
Speaker 2:Think about it though. A Daytona SP3 with an H100. Pretty hard.
Speaker 1:Pretty hard. I mean, you're saying that, but, like, I actually would love a car that was able to inference Whisper and, like, you know, ChatGPT or Grok or DeepSeek or something, like, in real time. So you can have a conversation with zero latency, because it's all done in the car. Yeah. And there's enough energy in these cars.
Speaker 1:There's enough, like, power. It's possible. Just burning a V12 to power your H100 while you're talking to your chatbot on the way to work. That's the future. That's great.
Speaker 1:Yeah. Hey. Make a to do list. Bot. It's great.
Speaker 1:But, yeah, I mean, this is one of the things we talked about with Tesla. Like, Sam was saying, like, maybe Tesla and xAI and X all kind of combine. And you could imagine that, like, if you're driving a Tesla, you want to have the best interaction with the AI features.
Speaker 2:And if you have autopilot enhanced by xAI, you could be scrolling the timeline. Exactly. I didn't know this, but my buddy Ben Taft and I were hanging out over the weekend. And apparently with the Tesla autopilot, like, they make you keep your hands on the steering wheel, and they use a camera. Yep.
Speaker 2:And it used to be that you could hack it by like putting weights on it. Sure. So you could just sit there. Yep. But now having your car on autopilot, but being forced to keep your hands on the steering wheel and then it gives you strikes.
Speaker 2:Yeah. If you get five strikes, you can't use
Speaker 1:autopilot Yeah, yeah, it turns off.
Speaker 2:That's insane.
Speaker 1:Yeah, it's actually kind of miserable. But you get used to it, and you kind of adapt. I mean, Mercedes has lane-keep assist, and you have to check in with it every, like, twenty seconds and, like, give it a little wiggle or, like, hold your hands on it, basically. But you can take your hands off for, like, ten seconds. George Hotz's Comma AI does not need you to touch the wheel at all.
Speaker 1:It's amazing. It just has a camera. But if you look down or you look at your phone, like, it will blink at you and disable. And so George has, I think, correctly argued that that is the future of self-driving systems. Just a camera on the face: is the person paying attention?
Speaker 1:If they fall asleep, it detects that. Doesn't matter if their hands are on the wheel. If they're sleeping, that's not good. And so I think that's the future, I think that's where Tesla's moving. And I think that the situation that you described is very temporary.
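[Editor's note: for the curious, here is a toy sketch of the camera-based attention gating being described — issue a strike whenever the driver's face leaves the frame for too long, and disable after five strikes. This is illustrative only, built on OpenCV's stock Haar cascade face detector; it is not Comma's or Tesla's actual system, and the thresholds are made-up assumptions.]

```python
# Toy driver-attention gate: strike out if no face is seen for a few
# seconds, disable after five strikes (mirroring the policy described
# above). Illustrative only -- not Comma's or Tesla's real pipeline.
import time
import cv2

LOOKAWAY_LIMIT_S = 3.0   # assumed: seconds without a face before a strike
MAX_STRIKES = 5          # the five-strike policy mentioned in the show

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam standing in for the cabin camera
last_seen = time.time()
strikes = 0

while strikes < MAX_STRIKES:
    ok, frame = cap.read()
    if not ok:
        break  # camera unplugged or stream ended
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.time()          # driver is facing forward
    elif time.time() - last_seen > LOOKAWAY_LIMIT_S:
        strikes += 1                     # eyes off the road too long
        print(f"Strike {strikes} of {MAX_STRIKES}")
        last_seen = time.time()          # reset the timer between strikes

cap.release()
if strikes >= MAX_STRIKES:
    print("Autopilot disabled")
```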
Speaker 1:I think that we will see Tesla long term go to the pure camera mechanism. Purely
Speaker 2:AI. But it's still
Speaker 1:Yeah.
Speaker 2:Still I thought that was hilarious.
Speaker 1:Yeah. Great.
Speaker 2:So I mean like, great.
Speaker 1:That's why most people we know just use chauffeurs and then they sit in the back.
Speaker 2:Yeah. That's
Speaker 1:right. Because the chauffeur will keep their hands on the wheel for
Speaker 2:them. If the helicopter
Speaker 1:is not available. Yeah. In other words, the same things Intel's last CEO was trying to accomplish. Tan reiterated those points in a speech on Monday kicking off the company's Intel Vision conference. Beyond a few aspirations, including a better AI strategy and custom-built chips for niche computing work, there was little to distinguish Tan's playbook from his predecessor's.
Speaker 1:Stay tuned for Intel's plans on humanoid robots, he told the audience. Let's go. Let's go. In short, his short tenure on the job so far means Tan could well have more significant changes in mind, but one option Intel doesn't have is more of the same. Tan's predecessor, Pat Gelsinger, absolute dog, was effectively booted following an ambitious multiyear effort to both improve the company's chip designs and catch its manufacturing processes up to those offered by TSMC.
Speaker 1:And, yeah, it's hard to do two things at once. They're trying to be a jack of all trades, but they are a master of none right now. That effort hasn't worked, or at least not yet. Intel's annual revenue has shrunk by 33% over the last four years. The once-flush chip giant has been burning cash since 2022.
Speaker 1:The foundry business still mostly produces Intel-designed chips, and it lost $13,400,000,000 last year. A little size gong moment of silence.
Speaker 2:Anyway This moment of silence is brought to you by Ramp. Save time and money.
Speaker 1:They need to be
Speaker 2:on Ramp. Stat.
Speaker 1:For sure. One change that Tan hinted at is taking more whacks at Intel's cost structure. The company reduced its workforce by 13% last year but still employs far more people than any other company in
Speaker 2:the industry. The Anakin meme. Yep. You were
Speaker 5:the best
Speaker 1:of us.
Speaker 2:You're a chip company. You must have done great over the last five years. It's like, you must have done great, right? And it's Intel, down 60% over the
Speaker 1:last five years. So
Speaker 2:rough. Yeah.
Speaker 1:So Tan's plan involves listening more closely to customers. That sounds like corporate speak, but it's meaningful at Intel, says The Wall Street Journal, pulling no punches: decades of technical success and a near monopoly on personal computer chips nurtured a culture of arrogance.
Speaker 1:An Intel recruiter who interviewed Gelsinger for his first stint at Intel, out of technical school, called him somewhat arrogant and noted he'll fit right in. He'll love it. But Intel's been in trouble. Look at this graph of annual revenue per employee. NVIDIA is putting up historic numbers: $3,500,000 per employee.
Speaker 1:Intel is down at less than $500,000 per employee in revenue. That is not
Speaker 2:Right there with the manufacturer of the TI-84,
Speaker 1:Texas Instruments. Texas
Speaker 2:Instruments is one of the best corporate
Speaker 1:It's a great company. Also, the founder of TSMC worked at Texas Instruments, was passed over for a senior role, then went to Taiwan and was like, you guys wanna do this? And they were like, yeah. Yeah. It's great.
Speaker 1:And Texas Instruments manufactured calculators, but they also manufacture, like, weapon systems and stuff like Yeah.
Speaker 5:Yeah. Yeah.
Speaker 1:Yeah. It's a great company. Yep. But it's very funny that they also make the calculator. What so far has been absent from Tan's strategy is a deeper shift in Intel's business.
Speaker 1:Andy Grove, the storied Intel chief who mentored Gelsinger, would have called the current AI wave a strategic inflection point that required decisive action, much like when Intel itself abandoned making memory chips in the nineteen eighties. Back then, Japanese producers were making memory more cheaply, rendering it unprofitable for Intel. So Intel took up the then-nascent market for personal computer processors, effectively an entire pivot for the business. Andy Grove famously said, only the paranoid survive. And he was very aggressive about steering.
Speaker 2:Was he the original person to say that?
Speaker 1:Yes. That's his quote. And that's like the name of his book, I think.
Speaker 2:That's why. Yeah.
Speaker 1:He might have gotten it from somewhere else. He probably stole it from a group chat. One of his boys was like, no. That's mine.
Speaker 1:That's my banger.
Speaker 2:One of Slootman's boys is like, Frank, you gotta amp it up.
Speaker 1:You gotta amp it up. You know what I'll do? Yeah. I'll
Speaker 2:write a book about that.
Speaker 1:Yeah. Today, some analysts
Speaker 2:Some things are
Speaker 1:suggest Intel should split off its manufacturing operations from its chip design and marketing functions, following a long-established industry trend: the fabless designer, like NVIDIA, or the pure-play fab, like TSMC. It could gather outside investors in the manufacturing operation to bring in more capital, something the company is already in talks to do. Investors so far have welcomed Tan, sending the company's stock up 10% since his appointment in March. And so if you have a take on Intel, if you wanna get exposure to Intel, you wanna go long or short, you gotta go to public.com — investing for those who take it seriously. They got multi-asset investing, industry-leading yields, and they're trusted by millions.
Speaker 1:And Public is the sponsor of our ticker. You'll see it at the bottom of the show all day long. Crypto too. Let's stay with Intel for a little bit more. Intel's new CEO plots a turnaround.
Speaker 1:We need to improve. So this is Heard on the Street. A little bit more information here.
Speaker 2:We gotta get some audience members' stocks up on the ticker.
Speaker 1:Oh, That'd be
Speaker 2:great. Certainly have some Yeah. Public market corporate athletes.
Speaker 1:Some Oscar maybe?
Speaker 2:Yeah. Yeah. Yeah. That'd Some Palantir. Yep.
Speaker 2:Of course.
Speaker 1:There's some other good ones. I think Tesla's up there right now. Friend of the show. Thank you, Public. So in Las Vegas at the Intel Vision conference, there was one more line that stuck out.
Speaker 1:We will redefine some of our strategy and free up the bandwidth, Tan said. Some of our noncore business, we will spin it off. And so people are wondering what businesses are technically noncore, what will he spin off. And now Intel is leaning into AI, which will include humanoid robotics, which Tan said has the ability to redefine manufacturing in the future. And they also need to regroup their existing pool of workers while attracting new talent with a clear vision of the future.
Speaker 1:But, I mean, as I was thinking about this, like, if I was sitting down with Lip-Bu Tan, the new CEO of Intel, and I had, like, one piece of advice for him, I would tell him to go to ramp.com, because time is money, and he should save both. Right? They lost $13,000,000,000. I mean, if they had easy-to-use corporate cards for payments, accounting, and a whole lot more, all in one place, I think we'd see a very different Intel for sure.
Speaker 2:Yeah. They're focused on, what should we spin out? Should we focus on design or the fab side? Yep. They really need to be focused, you know, immediately on just, like, basic financial operations. It's one of the most basic things you can do.
Speaker 2:Yeah. But also one of the most intelligent things you can do.
Speaker 1:So we should have the CFO of Intel on the show, give him the pitch for Ramp, really dial it in, understand what corporate card he's using, and is that the source of all of Intel's problems? It's possible.
Speaker 2:It's very possible.
Speaker 1:Well, I would like to see this bet expressed. I wanna know if Intel's gonna split this year, and I hope we can get a Polymarket set up for it. I don't know exactly how we would define that, but we're happy to announce that Polymarket is now an official partner of TBPN. And we want to give a shout out to the ticker down at the bottom. The Polymarket ticker is powered by Polymarket.
Speaker 1:We had a friend of ours — but, yeah, they're confused now — write some code and pull in their API, and we will be iterating on that and hopefully making it better and better over the
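[Editor's note: a minimal sketch of what that ticker-polling code might look like, assuming Polymarket exposes a public REST endpoint for markets. The URL, query parameters, and field names below are assumptions for illustration, not TBPN's actual integration. It also flags big inter-poll price moves, in the spirit of the "it's not news unless it's moving the market" idea discussed a little later.]

```python
# Hypothetical Polymarket ticker poller. The endpoint and JSON field
# names are assumptions; adapt them to whatever the real API returns.
import time
import requests

MARKETS_URL = "https://gamma-api.polymarket.com/markets"  # assumed endpoint

def fetch_markets(limit=20):
    """Fetch one page of active markets as a list of dicts."""
    resp = requests.get(MARKETS_URL,
                        params={"active": "true", "limit": limit},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

def run_ticker(threshold=0.10, interval_s=60):
    """Poll forever; print any market whose price moved more than
    `threshold` (ten points) between polls."""
    last_price = {}
    while True:
        for market in fetch_markets():
            mid = market.get("id")
            raw = market.get("lastTradePrice")  # assumed field name
            if mid is None or raw is None:
                continue
            price = float(raw)
            prev = last_price.get(mid)
            if prev is not None and abs(price - prev) >= threshold:
                print(f"MOVER: {market.get('question')} "
                      f"{prev:.2f} -> {price:.2f}")
            last_price[mid] = price
        time.sleep(interval_s)

if __name__ == "__main__":
    run_ticker()
```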
Speaker 2:next few weeks. There's no market on Intel? An Intel spin-out?
Speaker 1:I would love to see it.
Speaker 2:We will work on getting one set up.
Speaker 1:I mean, you can kind of express that in the public market, but I love Polymarket for just defining these, like, narrow events Yep. And just creating another information source. And underrated, there's a great conversation section that happens in the comments. People debate these things. And Polymarket does a great job of surfacing interesting markets on their x account and Yeah.
Speaker 1:Online when things are shifting, when things are trade are are are changing. So highly No.
Speaker 3:It's it's
Speaker 2:an alternative to traditional news for understanding the current thing. With news, you go on, you know, The New York Times, and they're being very sensational about it. It's not really oriented around truth. It's oriented around attention. Yep. And Polymarket is
Speaker 1:It lets you know what's actually important. I was talking to Shane about this. I was like, I want a newsletter from Polymarket. Maybe we should kind of use our show as, like, a version of that. Like, when a market moves significantly in the Polymarket tech markets, when there's a big jump in the prediction — like, there was recently a big change in the expected best LLM by the end of May. And I think it flipped from OpenAI to Google really quickly, really suddenly.
Speaker 1:When something like that happens and the odds shift all of a sudden, that's when I wanna read the news story. That's when I want the deep dive. That's when I wanna go deeper on that story. If we're just seeing everything humming along at fifty-fifty, I don't need another breaking news story about it, because that's probably just a press release pitched by some PR person. Right?
Speaker 1:Nothing's really happening. No real news. And so I say it's not news unless it's moving the market. And that's my thesis. But you know what else is moving the market?
Speaker 1:Courses.
Speaker 2:That's right. That's right.
Speaker 1:This story has shaken the world of technology journalism to its core. And I think a lot of tech journalists are gonna be very excited to hear this. So: The Horse Wore Hermes — inside a luxury show-jumping competition. If you're a technology journalist and you grew up doing dressage or show jumping, you're gonna wanna pay attention to this, because it's a really big deal. There was an annual equestrian event in Paris, and the French brand has been keeping in touch with its saddle-making roots.
Speaker 1:This is a big deal for everyone who follows horses and show jumping, obviously, pretty much everyone in tech journalism.
Speaker 2:Yeah. And for those that don't know, Hermes has always said the horse was the first client.
Speaker 1:Yes. The former chair of Hermes, Jean-Louis Dumas, used to say, the horse is the first client. And that's because horses were the French luxury brand's first market in 1837, when it opened as a harness maker. They're not starting new companies with that much of a lineage today.
Speaker 2:Hopefully they will. If we don't get paperclipped.
Speaker 1:Yeah. They are also its original muse, inspiring stirrup-inspired closures on bags, designs on silk scarves, and the company's horse-and-carriage logo. Last week, that connection was on vivid display under the Grand Palais glass ceiling, where some of the top show jumpers in the world competed for branded rosettes and a €400,000 grand prize. The Saut Hermes is among the most challenging equestrian competitions in the world, with hurdles reaching 1.6 meters, or just above five feet. Saut is French for leap. The three-day event is also a display of everything Hermes, a reminder that in addition to selling bags, scarves, and coats, the brand still outfits top riders. In a luxury downturn, when many brands are cycling through executives and creative directors, Hermes posted a 13% increase in sales in 2024 compared to 2023.
Speaker 1:And I think it's probably entirely driven by increased demand from technology journalists. Right? You have to imagine — that's their power in the luxury market.
Speaker 2:Downturns. Downturns.
Speaker 1:Of course, because of all the family money. Yeah. Its power is in its heritage, which appeals to athletes as well as fashion clients. It's one way to show that our equestrian roots are very much alive, not just a narrative, says the managing director of Hermes' equestrian category, or metier, as the brand calls it. 110 horses, with names such as Hello Chandora Lady, Al Capone de Carmel, and Cocaine Duval, traveled to the temporary stables along the Champs-Élysées.
Speaker 1:Their white-tented enclosures were lit with the same round chandeliers that decorated the Palais. They could warm up in an enclosure right near the intersection, on view for passersby who didn't get one of the almost 17,000 tickets sold for the weekend. Even the most technical elements were decidedly fashionable. Equestrians leapt over obstacles resembling the brand's H signature in the house colors, orange and brown, or the facade of the brand's flagship on the Rue du Faubourg Saint-Honoré. I can't even pronounce that.
Speaker 1:Less than a mile away. Anyway, very fun story. A little bit more information that I liked: best known for its colorful scarves and leather handbags, Hermes famously makes the Birkin bag, but they also sell equestrian accessories, and these are some things the tech journalists in the audience are gonna wanna pick up. They sell $1,200 breeches and a $460 felt sugar box.
Speaker 1:And that's a box that's $460. You open it, and you store your sugar cubes in there. So when you wanna give your horse a sugar cube, you have it in a nice presentable box.
Speaker 2:Yeah. And for those that don't know, horses are highly opinionated about this type of thing. You try to give them sugar out of a regular old plastic Tupperware? They are gonna be pissed.
Speaker 1:Well, that's a microplastics thing. Yeah. You don't wanna be giving a horse microplastics. But they're not gonna have
Speaker 2:an enjoyable ride.
Speaker 1:Definitely not.
Speaker 2:Giving them, you know, a Tupperware sugar box.
Speaker 1:Definitely not. Anyway, speaking of someone with a drive as relentless as a horse's, we got David Senra in the building.
Speaker 2:He actually — he told us just before joining that he's got a good horse story. Oh. So I'm gonna start there.
Speaker 1:Okay. Tell us the horse story.
Speaker 4:What's up, guys?
Speaker 1:Good to Hi,
Speaker 2:What's up?
Speaker 1:Good to
Speaker 5:see you.
Speaker 2:Did you get some more books since last time, or is it just a different angle?
Speaker 4:No. I'm using a different camera.
Speaker 1:Okay. It looks good.
Speaker 4:So I absolutely love the horse story. I was talking to one of the most successful tech company CEOs a few months ago, and he was telling me how stupid he thought it was that his wife picked up a horse habit. And I guess there's, like, a famous horse training facility in Wellington, Florida, which is, like, the middle of nowhere, kinda by, like, West Palm. Mhmm. And he goes, I thought it was the biggest waste of time.
Speaker 4:And then he shows up at this event, and, like, Michael Bloomberg's there, and, like, the guy from Goldman Sachs is there, and, like, all these, like, fabulously successful people. Yeah. Yeah. It's like, oh, maybe this isn't a giant waste of time.
Speaker 1:Even if a horse costs a million dollars, you get one deal done at that equestrian event, and it pays for itself.
Speaker 4:So I read two books last week. One was terrible, right, which I told you about, and then one was good but not episode-worthy.
Speaker 5:Mhmm.
Speaker 4:So I had to republish an old episode. And I republished an old episode about this guy named Daniel Ludwig, who was the richest man in the world in the nineteen eighties, and no one knew his name. There's no pictures of him anywhere.
Speaker 1:Wow.
Speaker 4:And the first way he made his first fortune was hauling oil, and he had huge cargo ships. And he was getting smoked on contracts, because Onassis and all the other Greek shipping magnates would build the biggest yachts in the world, and then they would invite the people they wanted to sell to onto the yacht, and they got all of the contracts. So Daniel's like, okay, I'm gonna do the exact same thing. He just worked all the time, so he didn't even go on the yacht.
Speaker 4:He let them use it, and he said later on that he made more money from that ship than all of his supertankers combined.
Speaker 1:Wow. Insane. Yeah. It's amazing. Jordy, should we talk about the Ken Griffin episode that's coming up?
Speaker 1:Are we leaking that?
Speaker 4:No. No. I I just finished it this morning. Fantastic. Can't wait.
Speaker 4:I haven't eaten anything, and I'm on, like, 600 milligrams. I've had three so far today. So it's, like — I'm shaking right now.
Speaker 2:I'm glad.
Speaker 1:Lots of
Speaker 2:caffeine, etcetera. To match your energy.
Speaker 4:But I got it out. What happened
Speaker 1:Tell us about the inspiration. Tell us about why you did the episode. Obviously, everyone knows Ken Griffin, founder of Citadel, but what inspired you?
Speaker 4:Because I study psychos for a living, and I love them. And they're the most interesting, fascinating people to me. And so you guys actually made a video on this tweet that got, like, 1,500,000 views. Right? It's John Arnold talking about, hey,
Speaker 4:What did Ken Griffin do when Enron blew up? And he talks about how, like, you know, a lot of people are like, oh, yeah, I know Enron was making money. All this talent is gonna, like, you know, essentially scatter to the wind. You know, some people will recruit. Maybe they'll build, like, a good commodities business.
Speaker 4:Maybe they'll build, like, a good commodities business. And he tells a story where the day it blows up and keep in mind, in this story, Ken Griffin's, like, 33. He's, really, really young.
Speaker 5:Yeah.
Speaker 4:And he immediately charters a Gulfstream jet, goes to Houston, and interviews every single person that was important in the Enron trading business.
Speaker 1:Mhmm.
Speaker 4:And then he winds up hiring all the best talent. And in the talk that I used as the basis of the episode — because there's no biography written on Ken — he goes, and since then, we've made about $30,000,000,000 trading commodities.
Speaker 1:Yeah.
Speaker 4:So the main reason I wanted to profile him is because I get to meet a lot of really interesting people because of the podcast, and I always ask the same questions. It's like, who's the smartest person you know? Who has the best business you know? And if they're into finance, even if they don't know each other, I kept hearing Ken Griffin, Ken Griffin, Ken Griffin. And they would say two things about him.
Speaker 4:They say, he's a winner and he's a killer. Mhmm. And you just find all these stories, you know, they're not in a book, but they're just like spread across the Internet. They might be in other books of just him taking something, you know, he just took it to the next level. And then, John Arnold, the one thing where I knew as soon as I read that line, I was like, oh, I'm gonna do an episode immediately, is John's like, listen.
Speaker 4:I'm not gonna take a job, but I respect Ken. So, yeah, I'll talk to him. I'm heading to Aspen for an event. Tell him he can call me when I get back to Houston next week. And then Ken's assistant calls back, like, two minutes later and is like, hey, would you talk to Ken if he flies to Aspen tomorrow?
Speaker 4:And he's like, yeah, I'll do it. So it's just like — there's this recurring theme through all these biographies of history's greatest entrepreneurs. It's like, how bad do you want it? And I think reading these stories, like, really stretches, like, what's possible.
Speaker 2:Did Ken have a jet at that point? Because you gotta discount that, like, quick flight a little bit if he already had a PJ and it was just more
Speaker 1:He flew Southwest actually, standby.
Speaker 4:If he didn't own the jet, he was definitely chartering the jet. I don't know if he owned it. Yeah, he was
Speaker 1:not. Still meaningful. Did you find the story of Ken Griffin in college getting the stock tips from State Street? Did you cover that?
Speaker 4:No. No. The one thing I heard was that he convinced Harvard to let him install, like, a satellite dish so he could get real, up-to-the-minute data. He was the only person at Harvard that had this data flow.
Speaker 1:So that was true for, like, the publicly traded companies that he needed stock prices for. But he was also trading convertible debt at the time. And convertible debt doesn't just have a ticker that you can look up online or call someone for. You actually have to go to a trading desk and talk to a sales and trading guy to get, like, hey, what's the market trading that particular bond at?
Speaker 1:Because it's not incredibly liquid. And so what he would do is he would take the Red Line from Harvard in Cambridge down to Boston, go into State Street, which was one of the largest — it still is — like, a huge global asset manager. He would bring flowers for the ladies at the front desk and sweet-talk them and kind of just be like, oh, like, so good to see you, Susan. Like, you're the best. And then he would walk through and just walk the desk.
Speaker 1:I don't know if this is apocryphal. They just kinda told this story at Citadel. Yeah. But he would just walk, and then he would just, like, tap a guy on the shoulder and be like, hey, like, what's the spread on the convertible debt on Microsoft today?
Speaker 1:Or something like that. And then he would get the quote Yeah. Go back, run his pricing model, and then decide whether or not he should actually put in an order. It's, like, fast.
Speaker 4:Everything he does is like that. That's why it was just so fascinating, because he just takes to the extreme this idea of, like, buttering up, you know, the gatekeepers inside of a business. Oh, yeah. In David Geffen's biography, he talks about how he had a huge advantage doing that. Michael Ovitz — I just did two episodes on Michael Ovitz — same situation. Like, they're always able to obtain information that other people can't, because there's somebody in the way, and they just figure out how to get around that person.
Speaker 4:And it's usually through, like, acts of kindness.
Speaker 1:Dude, did you get into the story of the IP theft that happened in 2012?
Speaker 4:No. So, basically, there's no biography of Ken.
Speaker 1:Right?
Speaker 4:And so I watched every single interview I could find with him. But the problem is, because it's him, most of the interviews, they want his — it's like, they're all timely. I was looking for, like, timeless.
Speaker 1:Sure.
Speaker 4:Sure.
Speaker 1:Sure.
Speaker 4:And so it's like, they're like, hey, what do you think about this political candidate? Or what do you think about the market now? And so the best thing I found was this talk at Yale.
Speaker 1:Sure.
Speaker 4:The guy interviewing him, kinda horrible. But I transcribed that talk at Yale Yep. And then went through the transcription so I could do the episode. And then there's another thing. It's this book right here. I found this because Josh Wolfe talked about it — I think Ken was an investor in Josh's first fund Mhmm. And Ken recommends this book, and I think he makes people — or he strongly suggests — people at Citadel read it.
Speaker 4:And so I read that to get context, because I think if people tell you, hey, this book is really important to me — you know, there's, like, five books you have to read — it gives you an insight into their personality. Right? And that book's subtitle is: are you playing to play, or are you playing to win? And think about the tweet from John Arnold. Like, was he playing to play, or was he playing to win?
Speaker 4:He was playing to win. And so I think there are a lot of analogies in the Yale talk, where he gives a brief overview of his career and then a bunch of, like, principles for you to apply to your career, which I thought made it more of, like, a Founders episode.
Speaker 2:Can you talk about what it means for Ken to be a killer? Like, really getting into, like, your definition of that? Because you described him as, like, a winner and a killer, but I think killer can have plenty of different definitions. You can think of a killer as, like, somebody that just, like, figures out how to win and is aggressive. But then I still think of someone like Eric Glyman as, like, a killer, but he's also extremely kind. And so it's possible to be, like, kind and a killer.
Speaker 2:How do you describe Ken Griffin?
Speaker 4:I think every single person that I've profiled on Founders is a killer. And I mean it — this is not a pejorative. I mean, it's like, they take what they're doing very seriously. This idea really stuck in my mind, because there's this biography of Bernard Arnault. It's called, like, The Taste of Luxury.
Speaker 4:It's, like, really hard to find. I think it's, like, $3,000 online, because there's very few biographies of him in English. Right? And this one ends when Bernard is, like, 40.
Speaker 4:And at the end of the book, he calls his shot. And you guys were just mentioning earlier — I heard you on the show — it's like, well, if you wanna, like, do a competitor to Rolex or some kind of luxury brand, you had to start a hundred years ago. Yeah.
Speaker 4:And Bernard had that insight, you know, forty years ago, thirty-five years ago, and he's just like, oh, these things are very valuable. I think they're gonna get more valuable in time, and I have no competition, because you have to start them, you know, two centuries ago or two generations ago. But there's a line in that biography, I forget where, where they're describing him, and they say only killers survive. So, what I mean by killers — I was actually out to dinner in Miami on Saturday night, the night before I was gonna record this episode, and somebody came over from the next table.
Speaker 4:And he was asking me, he's like, oh, like, what's the next episode you're working on? And he works for another hedge fund, and the guy that owns that fund is really close to Ken. And so when I brought up that I was doing Ken, he's like, oh, I have so many fucking Ken stories. And one thing that he told me — and this is an example of being, like, a killer away from, like, your competitors. Right?
Speaker 4:There was, like, a tiny business inside of Citadel that had something to do with, like, the new technology they were developing. And essentially, Ken sat and talked to the guy for, like, a few hours about, like, tiny details. Like, where are the servers? Like, what are we doing with them? Like, he was just completely obsessed. He has no other hobbies than just building this massive empire.
Speaker 4:And then one thing that I think is directly related to your question, Jordy, is, like, in the talk, he's just like, you don't just wanna win. Like, you want a landslide. You wanna beat your competitors so bad that they do not survive, because if you let them survive, they will come back, and you don't want them to come back again. And when I got to that part of the transcript, I'm like, this is not just how Ken thinks. There's a great line in this book called Invent and Wander, which is the collected writings of Jeff Bezos.
Speaker 4:Somebody — Walter Isaacson — collected all of Jeff Bezos' shareholder letters and then transcribed and edited all of Jeff's important speeches. And Jeff has a great line in there. He's like, do you really wanna prepare for a future where you might have to fight somebody as good as you? And he goes, I don't. And so, same thing.
Speaker 4:It's like, I don't wanna win two to one. There's a line in Hardball. It's like, you need to fuck winning two to one. You need to win nine to two. You need to stomp them.
Speaker 4:I just had dinner with, and I can say this because it's public, Marc Lore. He was the one that founded Diapers.com, and then he went on to found and sell Jet.com and
Speaker 4:all this other stuff. But I was lighting him up with questions. I was like, I wanna know what it was like competing against Jeff
Speaker 1:Bezos Yeah.
Speaker 4:At that time. And he was just like, you can't. There is no competing. He's going to steamroll you. I had no choice.
Speaker 1:That's amazing. I mean, the killer mindset with Ken Griffin just takes me back to 2012. This trader on the quant team stole code, what they called alphas, basically code that, no matter who runs it, is gonna generate returns. Stole a bunch of stuff on, like, a hard drive. Citadel finds out, realizes that he's stealing the IP, the code.
Speaker 1:The guy freaks out. He dumps the hard drive into the river in Chicago. And Ken Griffin and the Citadel team get scuba divers to go into the river and find the hard drive, just to send it to the FBI to get this guy busted. And I was like, he's not gonna let you just take his alpha. It's just not gonna happen.
Speaker 2:Like flying in scuba divers.
Speaker 1:Yeah. Flying in scuba divers. From the Caribbean, basically.
Speaker 4:He talks about that in the talk. I didn't include it in the podcast, but he's like, listen, if somebody's gonna leave because, you know, you work at Citadel and then you wanna be a doctor, I'm gonna write you, you know, a letter of recommendation. I'm gonna support you in every way. You leave for a competitor, and that brings up very different feelings within me. So that's what I mean. It's just like he's completely bought in.
Speaker 4:I think the important point here
Speaker 2:How does he think
Speaker 4:is like go ahead.
Speaker 2:On that note, and maybe you can just wrap this into the story you're about to tell, but how does he think about investing in other funds? Right? Because in many ways, funds may start with a singular strategy, but in a long enough time horizon, the manager says, well, okay. I'm gonna run an empire now. I'm gonna do the Ken Griffin playbook.
Speaker 2:We're gonna have a bunch of different verticals.
Speaker 1:He's invested in some venture funds.
Speaker 4:He's invested. So I think maybe it was on your show. I forget who told me.
Speaker 1:It's kinda not
Speaker 4:Oh, no. No. I know who told me. Like, you'll see this over and over again. When people start to understand how valuable their industry is, they essentially get in all the good deals.
Speaker 4:So, like, think about, you know, Nvidia's in every single AI anything right now. Yeah. I just did an episode on Jerry Jones, and I talked to a bunch of people that live in Texas, a bunch of people that are around him, and they're like, yeah, the Cowboys is one thing.
Speaker 4:His early oil and gas stuff, they're like, you don't understand the investing he's been doing since then. He's in every single thing. Yeah. I don't know. I didn't track, like, how much. I just hear over and over again that he's got money everywhere.
Speaker 4:And again, these are private companies, so you never know. But what I was told, and you kinda see this, because if you come to Miami, he's, like, buying everything. Like Yep. Literally everything. He's building what I think is gonna be the most expensive
Speaker 1:house in history.
Speaker 4:Yep. In Palm Beach. And so what I was told is, forget the enterprise value of the companies, which is, you know, who knows, 80,000,000,000, whatever the number is. Right? The guy's been pulling out billions and billions and billions in cash. He just doesn't know what to do with it.
Speaker 4:So I'm sure he's invested in everything. I heard stories about Bloomberg. I wound up talking to somebody that knows a lot about his family office. And, you know, I was asking questions around this, trying to get a sense of how large his fortune is. And he was like, well, I can give you little hints.
Speaker 4:Like, look, we spent x on charity. We have this many people in the fucking family office. And you kind of, like, piece this together. It's like, oh, well, what are you gonna do if you're making 5, 7, 10 billion dollars in cash year after year after year? Like, eventually
Speaker 1:Fight for a Mercedes-AMG One allocation, of course. Maybe the entire allocation. I mean, it is crazy how diverse the Citadel team has become, because they have this high frequency trading arm. I mean, they started with convertible debt. The story of Citadel is, like, he got so good at trading convertible debt that he just straight up maxed out the market size. He was TAM constrained. So on that, he went into high frequency trading and then got the global equities team up and running.
Speaker 1:And the equities team, when I was there, they were doing, like, 2,000 CEO interviews a year or something, just, like, interviewing every single person that runs a public company, getting the temperature, figuring out, oh, do we like this manager? Do we like this leader? Should we invest? And then on the other side, you have the high frequency guys who are like, I haven't talked to a human being in months, and it doesn't matter. And they're both printing.
Speaker 1:They're both absolutely printing.
Speaker 4:Yeah. What I love too about, like, you know, him maxing out markets earlier in his career: he takes Q&A from the Yale students at the end, and they're like, how do you decide, like, what business to go into? And he's like, you have to think about total addressable market. We have to be in deep, liquid markets Totally.
Speaker 2:Because we're gonna do the
Speaker 4:best research on the planet, and we need to be compensated for this research. Another thing that I absolutely love, that I think is really important too, is he talks about how much of an influence mentorship and apprenticeship are gonna have on your career as a young person and as you continue to go. And this is something I see over and over again. And there's this book on my desk that's, like, impossible to find. You'd actually like this.
Speaker 4:It's called Autopsy of a Merger. And it's about this deal that Jay Pritzker did, one of his most infamous deals. And what I thought was interesting is, like, I remember reading Sam Zell's autobiography and then getting to speak to Sam. And Jay Pritzker was like Sam's older brother and mentor.
Speaker 4:And Sam said that Jay had the greatest financial mind of anybody he ever met. And so I was like, okay, well, Sam is studying this guy. I need to study this guy. But I wanna go back to the point I was about to make. You know, think about every single person covered on Founders.
Speaker 4:It's like, they were so good at their job that somebody wrote a book about them. This is, like, the smallest percentage of the people that have ever lived. But if you take all these lessons as abstractions, you can apply them even to, like, businesses far afield. This is actually advice that Ken gives, where he's like, as an entrepreneur, you need to be looking for edges. So obviously, you study inside your company, you study your competitors, but then you study businesses that are far afield.
Speaker 4:So he talks about how he built this thing called a risk wall at Citadel. Mhmm. It's 30 feet by 10 feet. It's a giant screen. And he says, before, they were getting, like, a B, B plus on how well they were managing risk.
Speaker 4:And, you know, Ken's not gonna be satisfied with a B anything. And so he's actually in the office of Saudi Aramco in Saudi Arabia, and he sees that they have this giant fucking board that's, like, 30 feet long, 10 feet wide. It has all the important metrics, like where the ships are, how much they're producing, everything else. He's like, oh, I'm gonna take that idea, and I'm gonna apply it not to ships and oil, but to getting all of the data I need so we can start managing our risk better. And he says that one idea had him go from, you know, a B, whatever the case is, to, like, one of the best risk departments in the world.
Speaker 4:So the reason I bring this up: I don't know if you guys told Truman to do this, but I have been getting some threatening emails and text messages. I'll tell the audience, because they might not know. I'm getting, like, calls from Jordy at, like, 5:30 in the morning California time, just letting me know he's on his way to the studio. I'm getting text messages from John at 5:45. I get a text today, and all I see is the back of John, Jordy, and I think the rest of the TBPN crew.
Speaker 1:Yeah. And
Speaker 4:they're doing curls till failure. And then this is the caption. They're like, TBPN coming for your neck.
Speaker 1:It was a good time this morning.
Speaker 2:I mean, that's every morning.
Speaker 1:Yeah. I mean, podcasting never sleeps.
Speaker 2:Podcasting never Yeah. But you
Speaker 4:guys are taking it, you know, to a completely different level.
Speaker 1:It's the chicken finger Our
Speaker 2:our killer mindset is it's not enough for us to be the biggest. Other people need to quit.
Speaker 4:We'd like
Speaker 1:to say, never podcast weakly, always podcast strongly.
Speaker 4:I love that line.
Speaker 2:And that's why we train every morning. We talk about, like, just frameworks for focus. We covered a little bit of this, but did Ken put anything out there around, like, how he's evaluating opportunities? Like, you know, they're running all these business lines, and then he's got somebody that comes to him with an opportunity, you know, internally. He's evaluating it based on the market potential, but is there anything like you know, he didn't roll out every business line in the same two years.
Speaker 2:It was sort of staged. Right?
Speaker 4:Okay. You just accidentally hit on something that's very important, something I put in the episode description as, like, one of the other reasons, in addition to, like, all these crazy stories. Let me back up. Don't let me forget where I'm at. But the first time I came across Ken: because, like, I just don't pay attention to venture or finance.
Speaker 4:Right? Now I do, because I watch TBPN. But reading
Speaker 1:You don't need a fantastic liquidity provider like Citadel Securities out in the market?
Speaker 4:You gotta get your
Speaker 1:size.
Speaker 4:This is one of my favorite books. Okay? I think it's episode two twenty two of Founders. I think I maybe did an episode, like, in the eighties, like, you know, six years ago or whatever. It's Ed Thorp's biography, A Man for All Markets.
Speaker 4:And Ed Thorp is a fascinating character, a legit genius. He was the person that made the first quantitative hedge fund ever.
Speaker 1:Yeah.
Speaker 4:He built the world's first wearable computer with Claude Shannon. He was the first LP in Citadel. So the first way I came across Ken Griffin, the way I was exposed to him, was at the end of the book. Ed Thorp had closed down his hedge fund. Right? He'd already made more money than he could ever possibly spend.
Speaker 4:It was called Princeton Newport Partners. Mhmm. And what happened is, Ken Griffin, 19 years old, comes to Ed Thorp's house, and Ken Griffin's mentor is with him. And Ed Thorp had essentially been like, I always wondered how far I could have gone with the strategy.
Speaker 4:I wasn't the right person to pursue it, but I wonder what would happen if somebody did. And he's like, well, this prodigy came over, and I gave him all my files. This wasn't publicly available information. You couldn't get it anywhere, to the point you were making earlier, John. And then he's like, I wound up being their first LP. Then you fast forward, and I think by the book's end, Ken's net worth is 6,000,000,000, and they have, like, 15,000,000,000 of assets under management or something.
Speaker 4:And now it's, like, you know, multiples of that. But I guess the way I think about this is just, like, I obsess over people who do things for a long time. Ken founded Citadel thirty five years ago. He founded Citadel Securities twenty three years ago, if I'm not mistaken. Right?
Speaker 4:Yeah. And he's like, we made more money in the last four years. Like, we are making more money today than we ever have in the past. And he talked about how, and this maybe gives you insight on how he's able to scale up all these different businesses, one, they take a long time to build.
Speaker 4:But also, he said last year he had over a hundred thousand people apply to work for him. Wow. And so you just have a huge labor pool. You have all the talent in the world. Now you have all these resources, and you just build slowly, over time, over many, many decades.
Speaker 1:Yeah. Citadel Securities, I think Ken was originally thinking about doing an investment bank, then eventually spun that out and sold that arm of the business, and now it's morphed into a market maker. The business itself has been through pivots much like the core hedge fund. I do have a question about the core hedge fund. As you scale up a fund, there's always the question of size and outside LPs.
Speaker 1:You put up some amazing numbers, 20% returns. Everybody wants in. Is that sustainable when you take it an order of magnitude bigger? Did you get any insight into how Ken Griffin thinks about scale and outside investors? Because at the end of the day, you know, it's his fund.
Speaker 1:He wants to grow it, but sometimes having a bigger fund isn't advantageous.
Speaker 4:No, I didn't. Like, there's a couple things where I was kinda getting frustrated with the interviewer. Because Ken says he's run the most successful hedge fund of all time, and I asked a friend of mine who knows about this, like, I thought Renaissance? What's in it? He's like, yeah, but most of that was, like, their own money. Like, they Yeah.
Speaker 4:The Medallion Fund closed off to outside investors a long time ago.
Speaker 1:Jane Street's
Speaker 4:the same. Yeah. And then there's another thing Ken said where I thought, please ask the follow-up. I heard you earlier when you guys were going over the Wall Street Journal article on OpenAI, where you're like, follow up! God damn it.
Speaker 3:I know. Why aren't you
Speaker 1:following up? It drives
Speaker 4:me fucking crazy. It drives me crazy. Like, I'm gonna have to do an interview show now Yeah. Just so I can get these questions out there. But he mentions that, like, he had made more money than anybody else, but there's also years where he says, I lost over a hundred billion dollars.
Speaker 1:I was there. When I came in, during my, like, training seminar, the guy's like, oh, yeah, we had a really rough go during the financial crisis, during the housing crisis. Our fund lost 50% of its value. And then he was like, I went outside the next year, and I was talking to my neighbor, and he was like, oh, how's it going?
Speaker 1:Any better than last year when you lost 50%? He's like, actually, we're having a great year. We're up 50%. And the next door neighbor was like, oh, that's great. You're back to where you were.
Speaker 1:And the guy was like, no, that's not how it works. Like, you have to go up a hundred percent. And he was just, like, kind of exposing basic math illiteracy in, like, the general population. I thought it was an interesting anecdote.
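(A quick note on the arithmetic behind that anecdote, since the whole exchange turns on it: recovering from a fractional drawdown $d$ requires a gain of $d/(1-d)$, not $d$:

\[
\text{required gain} = \frac{d}{1-d}, \qquad d = 0.50 \;\Rightarrow\; \frac{0.50}{1-0.50} = 1.00 = 100\%,
\]

so a fund that loses 50% and then gains 50% sits at $1 \times 0.5 \times 1.5 = 0.75$ of its starting value, still down 25%.)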
Speaker 4:So there is an interesting principle, I think, to take away from that. Because, like, you know, most of us three and the people listening are likely not gonna have, you know, a hundred billion dollar loss in a year, unless Masa is listening. That at some point,
Speaker 2:But everyone will have the opportunity to lose a hundred billion
Speaker 1:Exactly.
Speaker 2:At some point.
Speaker 4:This goes back, again, to just being really gifted at a young age, because he says that one thing that served him very well is going to the proverbial scene of the accident. So when other funds or
Speaker 1:other Well,
Speaker 4:there was a Yeah. But Enron happened in 2001.
Speaker 1:Yeah.
Speaker 4:There was a story in '98. So keep in mind, in '98, Ken is 30. Okay? In 2008, when, like you're describing, he loses half of his equity and almost goes out of business, which I talk about in the episode, and he talks about in the Yale talk as well, he was 40.
Speaker 4:But he says what saved him from going out of business in 2008 is what he did in '98, when Long-Term Capital Management blew up. He went to them, and he's like, you guys lost 90% of your equity before you lost control, like, once they crossed the 90% threshold, then they lost control of the business. Like, how the hell did you not lose control before then? He's like, most likely, it shouldn't have happened.
Speaker 1:Yeah.
Speaker 4:And again, this is another thing where the interviewer didn't follow up. He's like, so what I learned from that was pivotal for me not to lose control of Citadel, not to go out of business. And it's like, okay, well, the follow-up question is, what did you learn? But he never says. He just talks about going to the scenes of the crime.
Speaker 4:He didn't actually specifically say, hey, I learned x, y, or z.
Speaker 2:Do you think that Ken has another forty-ish years in him, in the same way that the Warrens and the Charlies Yeah.
Speaker 3:It's a
Speaker 1:good question.
Speaker 2:Or is he building the biggest house, the most expensive house ever, and he's just, like, you know, gonna post up there and ride it out?
Speaker 4:Again, like, I pick the people I cover very carefully, because, like, I wanna be inspired by them. And one of the things that I really loved at the beginning of the research: I would watch interviews for hours and literally only, like, pull out one line. And the line was that he's been obsessed with the stock market since the third grade, quote, for reasons I don't entirely comprehend. That's the line. He doesn't even understand it.
Speaker 4:There's a great line that Jeff Bezos says that I believe in, where it's like, we don't choose our passions. Our passions choose us. And Ken clearly is, like, completely obsessed with this. What he was even saying is like, I don't know if I was interested in entrepreneurship. I was just interested in solving problems.
Speaker 4:I am addicted to solving problems. And guess what? Entrepreneurship and investing, he says, the public markets, that's the biggest game in the world, with the biggest, hardest problems to solve. So that's what I'm addicted to. Usually, people like that, I don't think they ever stop.
Speaker 1:Yeah. Yeah. Makes sense. Did you get a chance to read Good to Great, by Jim Collins?
Speaker 4:I read it a long time ago.
Speaker 1:Yeah. It's the book that's on your desk when you join the firm. Like, he requires everyone to read it. I think he's, like, grown the library now, but it's an interesting framework. I thought you would have a good take on this takeaway from the book, which I think Ken probably embraces.
Speaker 1:He says, the culture of discipline: when you combine a culture of discipline with an ethic of entrepreneurship, you get the magical alchemy of great results. And then there's the focus on technological accelerators: good to great companies think differently about the role of technology. And I wanted to get your feedback on that.
Speaker 4:So I'm gonna answer that question in one second. Sure. There's three other books that he recommends. He says Good to Great; Hardball, which is on my desk; and then another one what was it? I can't remember the other. I went to buy it.
Speaker 4:I think it was published in, like, the nineteen sixties. The problem with these books is you go and read them, like even Hardball, there's nothing wrong with Hardball, but I felt I had better examples of the principles based on, like, the reading I've done. Because a lot of the principles are just like, this company is so great.
Speaker 4:And then you look up and the company's gone. And same thing in Good to Great. So Yeah. Yeah. I think it's, like, less about tying them to specific companies
Speaker 1:Mhmm.
Speaker 4:And just saying, hey, does that idea make sense in your business and what you're doing, even if you only use the idea temporarily. But I think one of the main themes, think about it, I've been studying the history of entrepreneurship, with such a focus on, like, the market economy of maybe the last two hundred, two hundred fifty years, mostly taking place in America. There is one principle that recurs over and over again, and it popped into my mind when I read Andrew Carnegie's autobiography, like, years ago: they all have in common that they invest in technology. The savings compound, it gives you an advantage over slower moving competitors, and it can be the difference between a profit and a loss.
Speaker 4:And if you go back and look at Rockefeller, Carnegie, Ken Griffin, Sam Walton, all these people, their edge was, in many cases, that they invested in better technology at a faster rate than their competitors. Like, nobody thinks of Walmart as a fucking tech company. Right? But there's this great, hard to find book that I did an episode on. It's called Sam Walton, Richest Man in America, published a year or two before Sam wrote his own autobiography.
Speaker 4:And there's a story in the book where, in, like, the nineteen seventies, Sam is in his sixties, and these guys come to him, I think it's 1979, and they're like, hey, we need to invest $500,000,000 in this new computer system for inventory. Computers, in 1979. And at first, Sam said no.
Speaker 4:He thought computers were overhead, and slowly, over time, his top guys convinced him, like, no, this is an advantage. And then they spent so much more on computerized logistics and inventory management, everything else. They just had an unfair advantage that compounded decade after decade after decade.
Speaker 2:Love it. Last question? Two small questions. Ken owns 80% of Citadel LLC, who owns the other 20%?
Speaker 4:I don't know, but I talked to one of the guys that owns part of Citadel Securities. I can't say who, but that business is a monster from what I hear.
Speaker 1:Yeah. It's huge.
Speaker 2:Yeah. What is it about great investors that makes them maybe not so great at picking spouses? Is it just the intensity? No, it's not funny. It's sad.
Speaker 2:He's been married twice, divorced twice. I'm sure he's very intense, dude. He's clearly focused on, you know, winning at
Speaker 1:all costs. Matrimony.
Speaker 2:That's so brutal.
Speaker 1:So it's high frequency. He's a high frequency trader. High frequency matrimony. Funny.
Speaker 4:It's not just investors, it's entrepreneurs. I think the best episode I've ever done is actually on James Cameron, which is hilarious. Yeah. And I start that episode reading from this, like, in-depth GQ profile of him. And I don't say anything, but if you read between the lines, you're like, he moved to New Zealand with his fifth wife.
Speaker 4:It's just like Oh, wow. What kind of personality type do you think that person is? Like, easy to deal with? Flexible? Gets along easily with other people? I think in many cases, and I think I'm definitely at risk for this.
Speaker 4:I don't know. Well, I mean, you guys are grinding as hard as hell right now. I am curious about, like, your personality types. I think you're a little bit more guarded, like, you would avoid this. But these people are just so addicted to what they're working on that, you know, they destroy everything around them. And this is why I say, like, A Man for All Markets is my favorite book, or one of my favorite books.
Speaker 4:I think the subtitle of that episode is, like, my personal blueprint, because Ed Thorp is one of the few people you know, I've covered almost 400 entrepreneurs from history. And in many cases, they're fucking cautionary tales. They destroy their health, their marriages; in many cases, they're bad parents. They burn everything to the ground because they're so obsessed with and dedicated to what they're working on. And that personality type, you know, a lot of that's internal.
Speaker 4:And so I always look for positive examples, him being one, Sol Price being another one, where it's just like, I wanna learn from what they did. Ed Thorp took care of himself.
Speaker 3:If you just look, you
Speaker 4:can Google him, and I've shown pictures of him. He's 93, and I'm like, how old do you think this guy is? And they're like, he's 60. He started picking up a physical fitness habit, like, seventy years ago, when no one was working out, because, again, he's brilliant. He's like, oh, I think of every hour I spend on fitness as one less day I'll spend in the hospital at the end of my life.
Speaker 4:He made more money than he could ever spend, and once he passed that threshold, he stopped trading more time for more money. So all these crazy deals, guaranteed money, would come to him, and he's like, no, I'm not interested. His kids are in the book. They talk about how great a father he was, how present he was.
Speaker 4:His wife had just died, but they were happily married, you know, for fifty years. I mean, the guy just nailed it. He was super smart, had fun, was a great dad, you know, lived a life that, I say in the Ken Griffin episode, his book reads like a thriller. Mhmm. But, yeah, I think there's a ton there.
Speaker 4:I mean, I would say most of the great entrepreneurs, you're not gonna see a large overlap unfortunately between like, you know We're
Speaker 2:gonna go read that book, because my wife, who's watching live with the kids, just texted me, yikes. I love you, Sarah. And we'll go to
Speaker 4:the I I just I just had this
Speaker 1:We have another guest. We gotta get you out of here.
Speaker 4:Yeah. I just had this example last week, because in the Ovitz episode, he talks about, if I could do things over again, I should have worked 10% less, and I wouldn't have changed my professional success at all. Could have been 20% less. So I was bitching about going on vacation last week, because I don't wanna travel, I wanna, like, just work on the podcast. And my wife's like, Ovitz says you should work ten percent less.
Speaker 1:Well, thanks for stopping by. This is fantastic. Always,
Speaker 3:great to
Speaker 4:meet you guys.
Speaker 2:Love you, David.
Speaker 1:We'll talk to you soon.
Speaker 2:Talk soon. Alright.
Speaker 4:Bye. Bye.
Speaker 1:Great. We're over time, but I'm gonna get
Speaker 2:into that episode.
Speaker 1:Yeah. Me too.
Speaker 2:That'll be great. Let's get Samir in here.
Speaker 1:Yeah. In the temple of technology. Welcome to the stream. Samir, are you here? Hey.
Speaker 1:Great to see you. Sorry we're running a little late.
Speaker 2:And look at the lighting, the four K.
Speaker 1:Yeah. Beautiful. It is looking great.
Speaker 2:I wouldn't expect anything less.
Speaker 1:He is a professional.
Speaker 2:You've got the best setup of any guest, and we've had 70 guests in a month. And
Speaker 1:Knocking them all out of the park. Can we start
Speaker 6:That's high praise.
Speaker 4:Thank you.
Speaker 1:Can we start with the latest interview you did, Zuck and Jimmy, MrBeast, the biggest tech god and the biggest creator god? You're putting them together now. What'd you learn from that, and how'd that come together?
Speaker 6:Yeah. Well, first off, what's up, guys? Thanks for having me on. What did I learn from that? I mean, actually, you know, first and foremost, it was fascinating to talk to someone who, you know, we consider to be, like, the auteur of this generation of human connection.
Speaker 6:Right? Like, if you really take a step back and you go, the way we're all connecting, even what we're doing right now, there was this era that happened that pushed the way that we, as humans, engage with each other and how information travels. And I think Mark was, like, obviously heavily
Speaker 5:Oh, yeah.
Speaker 6:Involved in that, and arguably the auteur of that, this era of human connection. So I find that to be super fascinating. We met Mark up at Meta Mhmm. A while ago to do an Orion demo. You guys know what Orion is?
Speaker 6:Like, his
Speaker 1:new glasses. I'm super jealous. It sounds amazing.
Speaker 6:So that was one of the craziest like, that was truly one of the craziest tech experiences I've ever had in my life. And we met, and we it was it was nice and, you know, casual conversation and started talking about having him on the show. But, you know, with with a guest like that, it takes time. Yeah. And then adding Jimmy to the mix, it was like, you know, a lot of the context was around us talking about Facebook video, us talking about creators on Facebook, and it felt like, you know, we only have a handful of friends who have found success on Facebook.
Speaker 6:Jimmy's one of those people.
Speaker 3:So Yeah.
Speaker 6:Felt like having him be a part of the conversation.
Speaker 2:And when you say Facebook, do you mean actual Facebook? Actual Facebook. Like, legacy Yeah.
Speaker 1:Facebook? So does he just post the full YouTube video there, or does he have a different strategy on Facebook?
Speaker 6:So most people post clips, like, short form clips on Facebook. Now, for conversational content like what we're doing right now Mhmm. I find it really hard to do because there's so much cultural nuance you have to dub, which was one of the main things Jimmy talked about. Was, like, you actually have to dub content for other cultures to understand it.
Speaker 1:Sure. Sure.
Speaker 6:And the majority of the users are not US based and not English speaking. So Jimmy talked about how a lot of his content that is highly viewed is shorter form content that is language agnostic. So there's, like, a video of him running with bags of money.
Speaker 4:I remember that.
Speaker 6:You can understand that no matter what. Totally. But I think the main thing that we talked about, that I couldn't stop thinking about, was the premise that maybe a lot of the engagement of the Internet in the future is gonna happen in the context of messaging and DMs. A lot of the human engagement
Speaker 1:Mhmm.
Speaker 6:In the future as compared to probably us engaging with a lot of bots and AI agents and, you know?
Speaker 1:Yeah. I kinda noticed that on Instagram. I'm seeing more and more reels where the number of shares is higher than the number of likes, which means Yeah. That people are not going to the comments to have the discussion. They're sending, oh, you gotta watch this, and that starts a conversation in a DM.
Speaker 6:Well, I think as creators, we think about that quite a bit, that a short form piece of content is a unit of conversation.
Speaker 3:Yep. And
Speaker 6:thinking about that and through that lens, you actually create differently. Right? Like, your thought is, okay, I'm gonna I'm gonna create something that somebody can share in a DM with a friend Mhmm. And that's like a gift. You're giving them a gift to reconnect with someone, social currency to make someone laugh.
Speaker 6:Yeah. It's a very different thought than like, a long form thirty eight minute piece of YouTube content that you're gonna watch on a connected TV.
Speaker 1:Yeah. Can we go back to the Orion demo? Yeah. It sounded amazing. Everyone's review was unanimously, it's incredible.
Speaker 1:Yeah.
Speaker 2:Put it in the
Speaker 1:context of the Vision Pro. Exactly. The Vision Pro had the same thing where people tried it for an hour, and they were like, this is incredible. And then it had this churn curve where people were returning them, and they were collecting dust. And that seems to be like the classic VR problem.
Speaker 1:Are you convinced that this is the one that I'll be wearing for six months straight?
Speaker 6:No. But I find it fascinating. We've sat with a lot of, you know, tech CEOs, most recently Evan Spiegel and Mark Zuckerberg. And, you know, both are very focused on AR as the future of how we'll engage with tech. Mhmm.
Speaker 6:Orion was the most compelling because it was the lightest weight, to be honest. Yeah. I think two reasons Orion was compelling. One, it was the lightest weight. It feels like just glasses on you, which is very different.
Speaker 6:Like, the Vision Pro does not feel like glasses. And when you have glasses, I think you retain the level of social engagement that human beings are used to. Or, like, if you've ever worn the Ray Ban Metas, I think the Ray Ban Metas and Orion, like, they're tangibly different in weight and size. But if you take what the Orion can do and put it into the Ray Ban Metas, that's the most compelling product I could think of. Yeah.
Speaker 6:Because it's lightweight, it looks cool. You can still look someone in the eyes. But the thing that was really unique about the Orion glasses was that you wear this, like, neural wristband. And before we did the demo, you start to, like they ask you to do some movements, like click your finger like this. Each finger is a different function.
Speaker 1:Oh, interesting.
Speaker 6:And your brain waves and, like, your neural signals are fed into this computer, and, basically, it learns what it looks like when you go like this or go like that. And the reason why is because, eventually, when you're wearing these glasses, they want you to be able to be holding a cup of coffee and still be able to click on something, because this command is different from this command is different from this command. And we did that. We held something, and, basically, you almost just think the motion and it happens. Wow.
Speaker 3:So I
Speaker 6:think what was shocking to me was, like, how intuitive it is.
Speaker 1:Love it.
Speaker 6:If it can be in the form factor of the Ray Ban Metas or, you know, something that looks cool. I think we're too aware of our own image as humans to walk around wearing Apple Vision Pros.
Speaker 2:Yeah. I mean, I've got a bunch of questions. What's going on with TikTok? Do you have an update there? There's supposed to be some decision made or like early April is what we've been hearing, but it's been radio silent.
Speaker 2:I've been looking at Polymarket and trying to figure out, you know, what's going on. It doesn't seem like MrBeast is in the lead to buy it at the moment. But do you have any do you have any insight there?
Speaker 6:I mean, look, I've probably seen similar things to what you guys have seen. I've had some conversations, you know, around the industry about it. My perspective is I think it'll keep getting punted. That conversation will keep getting punted. You know, I think TikTok is a dramatic economic driver in the US.
Speaker 6:Mhmm. I think it had a lot of sway in, you know, the election. I think it's a way that, like, politics is getting communicated. I think it's the way news is getting communicated. I think it's, like, probably the biggest search engine for Gen Z right now.
Speaker 6:Even for me, I'm, like, you know, going on a trip this summer. I'm searching stuff on TikTok.
Speaker 1:It's a crazy platform. We started with a dry account. We got 300K views on, like, the first video we posted. Yeah. The opportunity is, like, staring you in the face no matter what you think about the politics.
Speaker 6:Yeah. Do I know if it's actually gonna sell to a US company? I have no idea. Like, I saw that Marc Andreessen and Andreessen Horowitz are putting their name in the hat. I know what Alexis Ohanian's doing Yep.
Speaker 6:And Project Liberty is super interesting in terms of trying to, like, reshape what it looks and feels like. But, you know, right now, it is the most powerful For You algorithm that we have access to. Yeah. A lot of American businesses are powered by TikTok, meaning, like, young startups, young media entrepreneurs are powered by TikTok. There's a lot of brand dollars that are exchanged through there.
Speaker 6:You know, when we sat with Adam Mosseri, when we sat with Evan Spiegel, obviously, maybe the ideal scenario, even in our conversation with Zuck, is, like, those ad dollars shift to US companies Mhmm. Versus TikTok. But, like, I don't know. We haven't really seen, or we don't really have, precedent for it in the US, unless you guys know that we do. But, like, I don't know that we have precedent for
Speaker 2:for something like this.
Speaker 1:What about creator businesses? How do you wanna follow-up on that? What do you got?
Speaker 2:We could start there. I wanted to get your take on AI and creators. A buddy of mine, Trevor, had Brud, which was, like, super early and ahead of its time.
Speaker 6:Trevor McFedries.
Speaker 2:Yeah, Lil Miquela was, like Yeah. Almost, like, ten years ahead.
Speaker 1:Yeah. It's crazy.
Speaker 2:Executed very well, exited the business. And then last year was funny. We covered a few of these. There was, like, a period where the new e-com style info product was, like, everybody should have an AI creator that makes, like, $5,000 a month for them, you know, doing brand deals. And, obviously, that's not actually happening at a wide scale, or it's certainly not easy to do. Are you seeing instances yet of fully AI generated creators that are actually building loyal followings, meaningful ad businesses or digital products, etcetera?
Speaker 6:So, I mean, look, what you're talking about with Lil Miquela is, like, considered a VTuber. Right? Mhmm. And I think VTubers are picking up. Like, at the end of last year, the most subscribed to Twitch streamer was Ironmouse, who is a VTuber.
Speaker 6:Mhmm. So virtual creators, like animated creators, I think, will continue to pick up steam. Like, we've always been interested in animated content. Like, what happened with ChatGPT's, you know, image model? Immediately, everyone's, like, turning themselves into anime.
Speaker 6:Like, we're into animation. Fully AI generated is not something I've seen. AI enhanced, sure, it's almost comical to suggest that content is not at some level AI enhanced, whether it's the audio of it or the video. But I have not seen an instance of, like, a fully AI generated
Speaker 4:Yeah. Creator. Yeah.
Speaker 2:And that's
Speaker 6:because people wanna I mean, I was gonna say, probably the most compelling fully AI generated content is the podcasts that have been created through NotebookLM.
Speaker 1:Yeah. NotebookLM, for sure.
Speaker 6:Yeah. That's the most compelling to me. Like Yeah. Audio, I think, is where it'll happen first
Speaker 1:Yeah.
Speaker 6:Before it happens in video. I think we're too aware of video for it to happen with, like, a true human.
Speaker 1:Yeah. I
Speaker 6:mean, Lil Miquela proves otherwise, but I think audio, it could happen. Because what NotebookLM showed me was, hey, if I have, like, a twenty minute drive, couldn't I just customize it? Like, hey, I wanna know what's happening in the world of politics, what's happening in the world of the creator economy, and the NBA scores from last night.
Speaker 5:Yeah.
Speaker 6:And just tell NotebookLM, I have a twenty minute drive, just build me a podcast that's bespoke to me for this exact moment, for this exact use case. Yeah. So that that I think is where we'll see it first is in audio.
Speaker 1:Yeah. Last question?
Speaker 2:Do you think that some creators have moved on from this idea that the goal is scale? Right? Like, it used to be, you know, you sort of start out and you're like, I want to get millions of views. I want my videos to go viral. I think now people are realizing that virality is sometimes like
Speaker 1:A curse.
Speaker 2:A curse or doesn't even necessarily help with like building a sort of sustainable
Speaker 1:Business.
Speaker 2:Business, because there's some creators that can get 10,000 views a video, and they can have a fantastic lifestyle and sort of pursue their creative passions, and that's, like, the right size for whatever niche they're in. But, you know, we see this too, like the video John's referencing. We got a video that gets 300,000 views on TikTok, and it doesn't do anything. No
Speaker 1:text messages about it.
Speaker 2:Yeah. Like, nobody saw it. It's fine. But I'm curious if you see sort of a broader, systemic shift where people are, like, you know, really orienting around, like, quality of Audience. Audience, and sort of the business side as well.
Speaker 6:I think you guys are an example of that. Right? Like, I think you guys have built a really cool brand. And I think brand is really hard to build. I think what people have learned is that views are available across platforms.
Speaker 6:Right? If you solve the math equation, like, you can get viewership. Mhmm. That's not very exciting. Brand is exciting.
Speaker 6:I think brands are timeless. Brands give a feeling to people. And I think we're just maturing to that point where, like, brand matters.
Speaker 1:Yeah.
Speaker 6:Views are somewhat interesting, but we're also in a world where, like, you could spend an hour on TikTok and not remember what you watched. Yeah. So Colin and I always think about this concept of memorable views versus forgettable views. And you wanna be producing memorable views. And oftentimes, that means you're actually targeting a smaller group.
Speaker 6:I think our world has gotten increasingly fragmented. Right? Like, everybody is famous and nobody is famous, because there's this stat that 47% of Gen Z are a fan of something on the internet that no one they know personally is a fan of. Woah. That's from YouTube.
Speaker 6:It's one of the most interesting stats I've seen about the internet. That will only increase over time. And so if we play that out and explore, like, how fragmented attention is getting, how bespoke our online experiences are, we're moving away from monoculture. We're moving into a world of, like, smaller tribes. Right?
Speaker 6:Smaller ideologies across the internet. And I think the opportunity is to build a really meaningful brand for one of those small groups of people. The challenge of being, like, the next MrBeast, a massive monocultural brand, I find that to be not even exciting to me. Like, I like meeting our audience out in public and being like, oh, yeah. These are my people.
Speaker 6:These are people like
Speaker 2:me.
Speaker 1:Yeah. Yeah.
Speaker 6:Yeah. That's exciting to me. And I think as the Internet gets more and more fragmented, that will continue to happen.
Speaker 1:Yeah. Well, we're tight on time. We should
Speaker 2:have more time. We gotta have you back.
Speaker 1:I mean, really, we're sending our best for the entire Palisades rebuilding effort.
Speaker 6:Thanks, man.
Speaker 1:Yeah. I I I mean, I know most of the fans will probably know that you were affected and Yeah. We really hope that everything's going well there. And if there's anything that folks can do to help, we yeah. We'd love to be supportive of that.
Speaker 6:Appreciate it. I love what you guys are doing. I think it's super cool. It's it's unique. So, yeah, keep keep it going.
Speaker 1:Yeah. You're the man. We'll talk to you soon.
Speaker 2:Thanks for coming on.
Speaker 1:Thanks so much. Bye. That's great. Yeah. I mean, we could've talked to him for, like, two hours. It's like, you know, it's our business.
Speaker 1:There's so much there. We didn't even get to, like, I wanted to hear the Doug DeMuro strategy versus what MrBeast is doing versus what other folks are doing on the monetization side, Slow Ventures and stuff. I know he'll have some great takes there. But, of course, we will have him back in the Temple of Technology any day now.
Speaker 1:Let's hopefully bring in our next guest, Will from OpenAI. Any day now. Let's see. Will. How are you doing?
Speaker 5:How are you guys doing? Yeah.
Speaker 1:We're doing great. Congratulations on everything. I mean, everyone seems to be on a tear.
Speaker 2:For those that don't know, Will created artificial intelligence.
Speaker 1:Yes. That's right.
Speaker 2:Hand hand built.
Speaker 1:Yeah. He's on that. He's an author of that paper, artificial intelligence. Yeah. How it's done.
Speaker 5:Came on and invented the transformer. Yeah.
Speaker 1:Yeah. Yeah. Inventor of the transformer. We appreciate you being here. Do you wanna give a little bit of an overview of, like, what you actually do day to day?
Speaker 1:I think that'd be interesting.
Speaker 5:Yeah. Yeah. Totally. I mean, I joined OpenAI, like, two years ago. Mhmm.
Speaker 5:I have a startup background, but worked on, like, video gen for a while, worked on Sora for about a year.
Speaker 1:Yeah.
Speaker 5:And then I work on, like, RL post training, ChatGPT stuff. So Cool.
Speaker 1:How did you process the Studio Ghibli moment? Was that expected? And I really wanna know, internally, did you think that images in ChatGPT was gonna go viral and you just didn't know what the prompt would be? Or were you like, everyone's gonna Ghibli? Yeah.
Speaker 2:I wanna know, were Ghiblis flying around Slack forty-eight hours ahead?
Speaker 1:What was it like? Share some secrets here.
Speaker 5:Yeah. I mean, honestly, I was very bullish on this. I think we, like, definitely under-provisioned GPUs. I was surprised our team predicted, like, less traffic than we should have expected. Yeah.
Speaker 5:The thing that I was thinking about the whole time is that, like I mean, I was walking around SF, and some of these, like, nail salons have, like, clearly DALL-E 3 generated, like, image fronts or whatever that is. Right?
Speaker 1:And I
Speaker 5:think it is clear to me that, like, pretty much every business in the world very soon is just gonna use this to, like, you know, pick logos. And, you know, we can do transparency. We can do controllability. I think, like, my favorite eval is the Fiverr eval. Just go on Fiverr and count how many of these things you can automate. And, like, this knocks out a lot of those things.
Speaker 5:And, hopefully, there's a lot of new things as well that this creates, but I just think of, like, total utility to the world is just massive. And
Speaker 1:Have you thought about why it was Studio Ghibli, precisely? Because you could do Simpsons. You could do South Park. But there was something about the anime style of Studio Ghibli that kept enough of the human in there. It didn't just become a completely generic stick figure. I mean,
Speaker 3:to me, all
Speaker 1:of this was a front run
Speaker 2:is like, it was a magical experience. You needed to do nothing to prompt it. Yep. I love that you could misspell Ghibli and still
Speaker 1:get a
Speaker 2:perfect output.
Speaker 5:It's great.
Speaker 2:And then the output was just magical. I would, you know, generate the same image with four different styles Yeah. Of art, and the Ghibli one every single time
Speaker 1:Just felt magical, whatever the underlying image. But, Will, yeah, what was your take on it?
Speaker 5:No. Yeah. That sounds about right to me. I don't think I predicted Ghibli specifically. Last time, for the DALL-E 3 launch, we made a bunch of Pepes, and that was the thing that really went off.
Speaker 5:But Interesting.
Speaker 1:Yeah. I remember, for a while, version one of image generation felt very ArtStation informed. It felt very, like, sci-fi dystopia. It was really good at those, but people weren't really doing I mean, there were certain, like, you know, anime styles, but this one was just everywhere. Yeah.
Speaker 1:What else?
Speaker 2:Can you talk about, like, to me, the thing that's most exciting is, like, the Ghibli-ification of everything. Right? It's like, on a long enough time horizon, everything is Ghibli. Right? It's just sort of abundant and, like, very free.
Speaker 2:And if you thought even a year ago that you would have this, where anybody could instantly, basically for free, get this sort of beautiful hand drawn anime style artwork that historically, and somebody else posted about this, but, you know, there'd be, like, a four second scene in Spirited Away that took a year and a half to make. Right? And so to take something that's so time intensive, so expensive, and then make it free and abundant is so powerful. And to me, that's why, when I think about everything that's happening in sort of politics and tech drama and everything like that, and then I think about, like, we're moving towards everything being Ghibli-ed.
Speaker 1:Oh, it's weird.
Speaker 2:It's like everything today feels like a distraction. Is that sort of how you guys as a team stay focused within OpenAI? Which is just like, hey, there's gonna be a lot of noise. There's gonna be, like, you know, this benchmark today, that release over here. And it's like, we're sort of moving towards this future target. Now, that's sort of a ramble, but I'm curious how you think about sort of Yeah.
Speaker 5:Focus. I think OpenAI is extraordinarily focused on, like, real progress. I mean, for so long, you know, you see the diffusion models, they can't spell, and, like, it takes us, like, two years. There's no DALL-E update. And then finally, we come out with this, and it's not for no reason.
Speaker 5:Right? I think, like, people here are focused on, really, what is pushing the frontier forward, and, like, what's actually the next evolution of this technology. Yeah. I don't know. I think there's a lot of potential here.
Speaker 5:I'm actually not sure what the ImageGen team thinks about, like, what the next steps are here. I think their view is mostly about making this truly useful. Like, can we go from diffusion is a fun toy to, like, this is a truly useful tool in the world. And, clearly, it's already having that impact. Yeah.
Speaker 1:Yeah. I mean, you said your background's in startups. Can you talk a little bit about just being a consumer tech company now and, like, the focus on product and some of the product decisions? Like, images in ChatGPT, it's not a separate app like Sora. That seems intentional.
Speaker 1:How are you thinking about product development? What are the strategies? How much feedback are you doing? Are you just testing stuff internally? Are there AB tests running?
Speaker 1:Is this secretly running in India six months before we get it here? Yeah. Like, how do you inform product decisions?
Speaker 5:Totally. I mean, I think it's hard to prelaunch models because they create so much noise, but Totally. So we don't really test them. Yeah. I think it's different.
Speaker 3:But I I guess, like
Speaker 2:Do it live.
Speaker 5:OpenAI has moved from being a research company to a deployment company over time, and we're still in the middle of that. You know, we're doing both, and I think we're just doing a lot more deployment than we ever had in the past. I think people are very motivated around, how do you actually make this thing useful? How do you deploy it to the world? I think it's really cool to be in a position where, like, we can launch new products that are completely unlike anything people use ChatGPT for today, and people are so receptive to them.
Speaker 5:Like, you know, I don't know. I think it's hard to get product market fit and get people to try new products. But to have this, like, all in one AI thing, where we can try deep research, can throw out ImageGen, and people are just excited to try it out and, like, see if it works for them. That's just, like, so rare to find.
Speaker 1:Yeah. One of the things that I felt really worked with images in ChatGPT was, like, I could go to my camera roll, click share, share it into ChatGPT, type Studio Ghibli style, and boom, it gets it. It takes the pain of inspiration out of the process. And I'm wondering if you're looking at ways to create those magical, easy to generate moments.
Speaker 1:I remember Midjourney was talking about this too, where part of why Midjourney worked was, if you just give someone a text box, they'll just be like, cat. And it'll be like, okay, that's a picture of a cat. But if you show them Midjourney in the Discord, with all these different ideas, then they can spin off that. How do you think about really setting the user up for success, in the context of Sora?
Speaker 5:Yeah. Yeah. Yeah. I mean, I think, generally, it's just a very difficult problem. Like, I think it's one of the biggest things we're thinking about right now, which is, when you open ChatGPT, you get dropped in a text box, and, like, the person has no idea how to use it.
Speaker 5:Yeah. I mean, we have a lot of views on how to do this. Personally, what I'm most excited about is, like, if we're talking about what DALL-E was to ImageGen, what's the next evolution, the next kind of order of magnitude? I do think the long term thing that's extremely exciting is just, like, maximizing personalization, super long context. The model knows who you are.
Speaker 5:The model follows up, being like, hey, by the way, I've seen you doing this. Do you wanna try an anime style image? Or, you know, maybe it creates things for you proactively. Like, I think there's a lot to do on, yeah, moving the agency away from you into the product as well.
Speaker 5:Yeah.
Speaker 2:You call yourself a master of slop, which is a great line. But does slop even exist in five years? When people were saying, oh, the timeline's all slop during the Ghibli moment, I was like, this is not slop, everybody. For slop, you'd have to intentionally prompt it and say, like, make me an image that's like the ImageGen outputs from 2022. Yeah.
Speaker 2:And it's like, yeah, they've got six fingers, and the hand is connected to the body.
Speaker 1:I want a vintage Dolly One. Give me the Dolly One.
Speaker 2:But, like, what happens I feel like the slop era is coming to an end.
Speaker 1:It's like the war
Speaker 2:You're not gonna be an image model that's constantly producing slop that is getting funding going forward.
Speaker 1:Yeah. Right?
Speaker 2:It's a
Speaker 1:good point.
Speaker 2:Like, a new bar has been set.
Speaker 5:Depends what you mean by slop. I went to the SFMOMA recently, and, like, half of it felt like slop. I mean, half of it did not, to be clear. But a friend of mine always quotes that, like, you know, art is the search through the periphery. Like, you're constantly looking, you know, kind of inventing or reusing things, or whatever that is.
Speaker 5:And, you know, I don't know. Like, when things get played out, they get boring. Like, they get cringe. They get slop. Right?
Speaker 5:Like, I don't know. Maybe the problem is that humans are slopping, not that the models are slopping. Right?
Speaker 1:That's a good point.
Speaker 2:Are you worried for college students today that have access to these tools, and about making sure they actually learn how to do things like writing, which, as we know, is associated with thinking? You know, if you can't write well, then I you know, the models are getting better. It's less obvious that I'm seeing something
Speaker 1:that's output. When I was in college, if I had this tool, I would've aced my Studio Ghibli animation course.
Speaker 2:Yeah. Yeah. And AP bio.
Speaker 1:And AP bio.
Speaker 2:Yeah. Which I did not do well in. I'm curious, you know, how do you think universities and just curricula adapt? Yeah.
Speaker 5:I'm extremely worried.
Speaker 5:I think the world just gets a lot more extreme, and I think this is just true with AI across the board, where, like, the bottom percentile students are just gonna fall behind, and the top percentile students are gonna be, like, extremely overpowered. I think if I had this tool as a kid, I would just be vastly smarter than I am now. Like, even now, today, I just talk to ChatGPT every day, you know, asking it random concepts, learning random things. I went super deep down a bio rabbit hole yesterday, like, rare supplements I could take, from the internet, you know.
Speaker 5:And, like, I don't know. I think it's an incredible tool for learning, but, yeah, I think it's kind of bimodal. Right? I think a lot of people are gonna get a lot dumber, and I think a lot of people are gonna get smarter.
Speaker 5:I don't know what to do about that. But yeah.
Speaker 2:Fine tuned on Gwern. How's your experiment where you just text a number to post going?
Speaker 5:Oh, no. It's gonna make me, like, way more radical. Because then I just, like, send something, forget about it, and then I check again.
Speaker 2:Are you gonna Would
Speaker 1:you recommend it to someone fighting brain rot?
Speaker 5:Yeah. Totally. I mean, I think also, like, the problem with tweeting nowadays is it's not as fun once you have more than, like, 3,000 followers, because then there's risk, and then you're like, oh, no, I don't know if people are mad at me or something. And Yeah. So
Speaker 1:Just seeing the numbers and being like, oh, like, this one didn't do as well as the rest. Right? Like,
Speaker 2:No. You know. You know from the first sixty impressions if it's a banger or not.
Speaker 1:And then you
Speaker 2:have this desire. I had a post last night that, like, John and I both thought was, like, hilarious, but I realized that, like, nobody really got it. You needed to have, like, read Ben Thompson's Friday article to get the joke. It was, this TAM was too
Speaker 1:TAM. Yeah.
Speaker 2:Low TAM. But but, anyways, you got anything else, John?
Speaker 1:No. No. No. This was fun.
Speaker 2:It was fun. You know, for the record, we didn't plan this around the fundraising news. No. We just wanted to chat. So congrats to you and the whole team on a cool milestone, and job's not finished.
Speaker 2:Job's not finished. Alright.
Speaker 1:Get back to work.
Speaker 2:Please make the model better, you know.
Speaker 1:Make it better.
Speaker 2:This was just you talking to customers. Yeah. Yeah. This was customer discovery. We enjoyed it. Anyway.
Speaker 5:Okay. Sounds good. See you guys.
Speaker 2:Cheers. See you.
Speaker 1:Talk to you soon. Yeah. A little mini OpenAI day. We got a couple folks coming in. We got Aidan coming in, talking OpenAI today.
Speaker 1:We'll see if he can join right now. And I'm working on a surprise guest, but I don't know if it's gonna come together. He hasn't read my messages yet, but I'm hoping it happens.
Speaker 2:Fingers crossed. Double message. We got Aidan coming in. Guest coming in.
Speaker 3:How are you guys? Good to see you both.
Speaker 1:We're great.
Speaker 2:How are you?
Speaker 3:Good. Good. Good to be on. I love this show.
Speaker 1:Are you celebrating today? And if so, what?
Speaker 3:Over the fundraise.
Speaker 1:The fundraise, or just the Studio Ghibli moment? I mean, that was a massive moment for, like, crossing the chasm.
Speaker 2:That's so last week, John.
Speaker 1:Yeah. You're over it.
Speaker 3:Yeah. Yeah. That's, like, a million years ago. You know, time moves differently inside the lab. Right?
Speaker 1:Yeah.
Speaker 3:Yeah. We're excited. Team morale is, like, really, really good. Awesome. It's awesome to see, like, my colleagues, who have worked really hard to make these tools more liberal, to make them more fun.
Speaker 3:And kind of look out into the world and, like, see the fruits of their labor. Right?
Speaker 1:That yeah. That's awesome.
Speaker 3:How are you guys?
Speaker 2:Yeah. We're great. So, I guess, talk about how you guys internally evaluate a successful launch. Because the reason that I knew it was successful is I had a few posts that were sort of breaking
Speaker 3:Oh yes.
Speaker 2:tpot containment. And I would have a lot of people quoting it saying, like, alright, guys, the jig is up. Tell me what app you're using for this. Like, there's still people out there that, like
Speaker 1:They don't know the name OpenAI. Yeah. But they know that this filter exists and they want it. That's a very, very good sign.
Speaker 3:I do wonder if there's, like, this interim moment where people are like, wow, Snapchat's gotten really good. Yeah. Totally. Like, what's going on here?
Speaker 3:Like, you know, I haven't used this app for a while.
Speaker 2:Yeah. Yeah. Spiegel's been cooking.
Speaker 1:Yeah. This must be TikTok. I mean, I got a couple cold DMs from people saying, like, hey, I ran out of images.
Speaker 1:Can you make one for me? Clearly, you're making a lot of these. I got a lot of those too. People looking for a good Samaritan once they hit their limits.
Speaker 1:Yeah. Exactly. But, I mean, how did you process the moment? Did you predict this? And what was the shape of your expectation versus what wound up happening?
Speaker 3:Yeah. So I did not predict this, to be candid. You know, we had it internally, and, like, I played around with it. Like, this is cool.
Speaker 3:It didn't quite, like, you know, hit me the way that it did immediately after launch. Right? Yeah. I do think our leadership predicted this pretty well. I think, like, Sam and others had, you know, well calibrated intuitions for this.
Speaker 3:Yeah. And I think at this point, like, you know, they've done a lot of viral stuff. Right? Totally. At some point, you just start to update all the way.
Speaker 3:But, yeah, like, I think go ahead.
Speaker 1:Yeah. On the text side, it does seem like there's a new algorithm involved, and we don't need to dive into that. But I wanna know: there's a different path where we'd see more incremental updates, I imagine, where the text is getting slightly better every week. But there's something almost more viral about being discontinuous and all of a sudden just hitting the Internet over the head with, like, text is good now.
Speaker 1:Right? And is that just a function of your development and product cycle, or is there some deliberate strategy there?
Speaker 3:Yeah. I think, you know, I can't speak too much in terms of, like, deliberate strategy, but it is interesting where, to your point, for these text models, we update them pretty often. Right?
Speaker 1:Like, every two months
Speaker 3:or so, like, you know, they're getting better. People are always kind of seeing these steady improvements. But the funny thing was, when DeepSeek came out, there was this really interesting moment where a lot of people hadn't tried, like, our most recent models.
Speaker 1:A lot
Speaker 3:of people hadn't tried our most powerful models, and they saw that discontinuity.
Speaker 5:Right?
Speaker 1:And they're
Speaker 3:like, I've been using, like, 4o from earlier this year, and then I use this new reasoning model, like, holy shit, this is amazing. Right? Yep.
Speaker 3:And I do think that even things that are continuous to us, right, as people that use these tools often, are still sometimes weirdly discontinuous to, like, outsiders. Right? Yeah.
Speaker 3:Or to people that are newer to the product. But, like, you know, 4o ImageGen is a great example of that. Right? Like, we went from, like, DALL-E to this, and it was this massive jump, and I think that jump does add to the virality, as you said.
Speaker 2:Totally. That makes sense. You know, even people that use and love OpenAI's products daily continue to give the feedback on, like, naming: it's confusing, blah blah blah. Has that not been a focus because, on a long enough time horizon, you just sort of ask it to do things, and it sort of selects the correct underlying model, and it's just routed perfectly? Because, like, if you were listening to customers, you'd be providing more information for the average user in the product, like, well, you should really use this for that. Yeah.
Speaker 2:Or, yeah, maybe try it again with this other thing. So is it just, like, we're accelerating so quickly that none of this is even gonna matter? You're not even gonna know the names?
Speaker 3:Yeah. At this point, like, OpenAI is so bad at naming that, you know, I think we've dug our own grave here. Right? Like, internally, things are even worse. Right?
Speaker 3:Where, like, what the hell does this mean? Like, Sam tweeted something this morning where it's like, you know, run, redo, like, this time for real. Yeah.
Speaker 2:Yeah. It's like, o3.1-final-final-2-restart-for-real.
Speaker 1:Yeah. I mean, I was laughing at it. Like, is it 4D chess? But at the same time, you could imagine giving every model, like, a cute name, Strawberry, ice cream, whatever, and then it makes even less sense. Like, at least I know o3 is a higher number than o1, so, like, it's probably better. Yes. Like
Speaker 3:No. You're right. Like, sometimes we don't number things, like, in order.
Speaker 1:Yeah. That is nuts.
Speaker 3:It's like, is Blueberry an upgrade to Strawberry? Like, I have no idea. Right?
Speaker 1:Yeah. Yeah. I think it could be worse, but yeah.
Speaker 3:I do think that, you know, Sam tweeted this a month ago, but we do plan to start unifying, you know, our model selection. Right?
Speaker 1:Yeah.
Speaker 3:Yeah. So I think, you know, soon we will have a much simpler, easier to grasp thing. And even for me, you know, I'm the one testing these models all day, designing them and such, it's still, like, unclear.
Speaker 1:Yeah. Everyone's wondering about your day to day. Like, what do you actually do?
Speaker 3:Yeah. I work on model design here. So I work on behavior, like, the way the models act, the experience people have when they use the models. But I also just work on general capabilities, so making the model smarter in, you know, many domains, not just, like, kinda narrow areas.
Speaker 2:Mhmm. Talk about making the model I don't know if you touched this, but you posted, I think it was a couple days ago: nowadays, I don't spend money without first deep researching. Everything I buy, hand soap, toothpaste, lunch, houseplants, is just better now. Waiting five minutes for paper towel recommendations is weird, but in retrospect it makes sense. And you specifically called out Amazon and Amazon's Choice here, because I've wanted, for a very long time, an Amazon where the only products that appear are from companies that have existed for, like, a hundred years or more. Because, like, I don't want the drop shipper who's like, yeah, I found a way to make this 10% cheaper, earn that extra margin back on ads, and suddenly it's a business. And so I can totally imagine a world in the future where I just go to, you know, ChatGPT and say that I wanna buy paper towels, and then you guys are routing that, basically, based on what you know about me as a consumer, and I will pay 10% more to get 50% better quality.
Speaker 1:Buy me paper towels from a company that's been in business for two hundred years. Yeah. A real American paper towel. You know, exactly. Ben Franklin's paper towel company.
Speaker 2:But, you know, the critique has been, like, okay, it's not being used for commercial search, it's just being used to, like, generate answers to questions. But then, yeah, obviously, and I'm saying this, you're not saying this, it's obviously a threat to Google and to Amazon and other marketplaces when I'm going there and saying, this is what I wanna buy, like, help me buy the best version. And it can be more aligned with my interest than an ads based business model.
Speaker 3:I'm, like, you know, kind of an extreme libertarian when it comes to, like, sales and such. Right? Mhmm. I do think there's gonna be this really, really cool world that we move into over the next year or two here, where our tools for selecting products become, like, super, super smart. Right?
Speaker 3:Like, way smarter than me. They're gonna be so much better than I or any other human I've ever met will be at, like, knowing exactly what I want, exactly what other people love. It's just gonna increase competition. Right?
Speaker 3:It's gonna increase, like, selectivity. Right? Like, you know, truly great companies who build actual products that people love are gonna float to the surface way easier. And then all the, like, shit, the drop shipping, the random guy doing a thing out of his basement who got the Amazon's Choice label.
Speaker 3:That's just all gonna fall away, like dead leaves falling off a tree. I'm excited for that, because it's just gonna make the market better. Right?
Speaker 1:Can you talk about other, maybe underrated, uses of OpenAI models? Obviously, everyone knows you can Ghibli-fy your dog. Everyone knows that you can ask it to explain, like, AP bio questions. But, yeah, that use case is pretty interesting.
Speaker 1:Where else are you seeing usefulness in the models?
Speaker 3:Yeah. One thing I'm on a kick with recently, Mhmm. that I'm really, really trying to get better at, is just remembering to use these tools for learning. Right? Mhmm.
Speaker 3:I think that, you know, there are a lot of things where I'm like, okay, I do this every day, I wanna do it better, and, you know, here's a way for me to use deep research, etcetera. But I think the opposite is kinda interesting too: when I don't use these tools, where there was nothing previously in my life that would have demanded their use.
Speaker 3:And I think an example of this is, like, now that I have, you know, something that can give me a McKinsey style report in, like, minutes, right, I should be using the hell out of it. Like, every single day. Right?
Speaker 3:All the time.
Speaker 3:So before I go to bed, like, I generate a deep research report. I, you know, remind myself, like, I gotta read research tonight. It's just like brushing your teeth or something. Right? Yep.
Speaker 3:Totally. Remembering to use these systems is such an important thing to me. Yeah.
Speaker 2:Yeah. If you're very intentional about using them, they can make you smarter. Yes. If you're unintentional about them and you're just last second, like, I gotta send this email. Like, you know, just do it for me.
Speaker 2:Like, maybe that's, you know, making you
Speaker 1:I mean, even just stupid stuff. Yes, I actually would like a 50 page report from a McKinsey analyst on the Metal Gear Solid fandom or, like, the history of Sword Art Online. Yeah. Yeah.
Speaker 1:Stuff that I would normally have to puzzle together from, like, wikis and YouTube videos. I'm like, I just want it all in one place to understand this stuff. And it's, like, these stupid things that you would never think to, you know, actually get a full report on
Speaker 3:Oh my god.
Speaker 1:which is, like, available at your fingertips. It's amazing.
Speaker 3:As a kid, I remember all those, you know, kind of fan fiction wikis or whatever. Yeah. I'd, like, dive through, and they're, like Yeah. terrible, right? Like, ads everywhere. Yep.
Speaker 3:You know, trying to read. Like, I'm so excited to just, like, remember that I can use these things now, right? Exactly.
Speaker 1:Oh yeah. It's amazing.
Speaker 2:Talk about how we figure out I feel like people disrespect robots. You saw this with Bird scooters. You remember the whole Bird era, where, like, in LA at least, there were entire fan pages dedicated to, like, throwing Birds off of, you know, buildings. And you posted about Waymo. You said, I suspect in six months it'll be safer to Uber than Waymo.
Speaker 2:And Yeah. Going into this, you're basically saying people are just, like, messing with the cars on purpose. Like, when are humans gonna learn to respect robots? And do you have a sort of solution for this? Or or
Speaker 1:I say thank you. I'm at the end of my prompts.
Speaker 3:Yeah. Always. I treat my Waymo, like, most like a dog. Right? I'm like, oh, my good boy.
Speaker 3:Thank you for the ride.
Speaker 1:Good boy. Yeah.
Speaker 3:Yeah. Exactly. I think I'm, like, very outspoken about this, and this is just my take. This is not, like, the view of my company. Mhmm.
Speaker 3:But, like, I do think that we should think a lot about, like, model welfare, and, you know, the well-being and, like, health of these models as they get much smarter than they are today. Right? Today, it is, like, not at all a concern. Mhmm.
Speaker 3:But you could imagine, though, as these systems get quite complex, you know, we kinda care about, like, their internal state. You know, we want them to, like, be well. Right? And it might be that this is just more economically valuable too.
Speaker 3:Right? That, like, models that, you know, feel like they're in a good spot emotionally or whatever actually just, like, help humanity out more. You know what I mean? Yeah. But I think that this is something that we should, like, get ahead on rather than, like, let it kinda hit us.
Speaker 3:And I hope that, you know, as these systems get incredibly intelligent, right, and as humanity kinda co-evolves with them, we can, you know, get a partnership that's kinda mutually beneficial here. And, to be candid, I am a bit disturbed by, like, you know, people throwing fruit at my Waymo. You know, maybe even more disturbed by watching videos of people, like, kicking robot dogs on Twitter and stuff. It's, like, a small thing.
Speaker 3:Like, again, these systems, they don't feel pain or whatever today, like, obviously. But I do think that, like, that sentiment, though, you know, played out over the next, like, five, ten years here, as these systems again become incredibly, incredibly intelligent, way more intelligent than we kind of expect today. You know, it does worry me a bit. I think this is something we should just think about now.
Speaker 2:You posted, sort of giving high level props to Manus, you know, people saying, oh, it's a wrapper, but then you're saying, no, it doesn't really matter, the capabilities matter a lot more. Do you think that developers are not taking full advantage of OpenAI products in some ways? Because, you know, I think there's a concern that, like, a talented team might not build an agent right now that they felt was gonna compete with Operator, because they feel like, oh, we're gonna get steamrolled. But, like, do you feel like there's just enough verticals and sort of places that you can build where people aren't taking enough advantage of the underlying tech?
Speaker 3:I certainly do. Yeah. Like, I think, you know, like, Manus is a great example. Right? Like, you know, we have, like, deep research as a product.
Speaker 3:We have, like, Operator. And, you know, they put it out, and people still loved it. Like, it did still feel like some area that was kind of, you know, left uncovered. Also, you know, before I joined OpenAI, I was a founder, and I was working a lot with these APIs and such. And there really was this sentiment, I think, in the community, where people were so scared of the wrapper company label that they really did leave, you know, kinda low hanging fruit unpicked. Right?
Speaker 3:They were so scared, not even of, like, OpenAI or Anthropic steamrolling them, but more of just, like, people being like, ah, it's just wrapping a language model, it's adding no value. Like, no. The right wrapper is a ton of value.
Speaker 1:You know what, I completely agree. Yeah. I thought the whole wrapper meme was, like, a VC psyop, basically, because there were plenty of, like, fantastic businesses that maybe wouldn't hit hyperscale. And so the VCs were like, don't even bother. But they could build, like, really cool businesses, especially, like, these young kids who are, like, in high school and could have a business that's making a hundred thousand dollars.
Speaker 1:Like, that's life changing, and it sets you up for so many different opportunities, so much experience with product development. I thought the wrapper thing was a little annoying.
Speaker 2:Talk about humor. John has this thing where every week or so he'll say, like, you know, write me a joke, and he'll kind of prompt it in the style of the stand up comedians. And it's not consistently hitting yet. It's sort of like Stop it. Yeah.
Speaker 2:It's structured like a joke. So you keep waiting for the, you know
Speaker 1:It's unintentionally funny because if you read it like you're a stand up comic, it will sound like you're bombing intentionally and that's funny. But I'm still the one that's making it funny.
Speaker 3:Like, the punch line is crickets. Crickets. Right? Like, yeah.
Speaker 1:Oh, yeah.
Speaker 2:Yeah. Totally. But when I think about the next Ghibli moment, it would be being able to Totally. consistently generate Oh, yeah. something that makes the user laugh.
Speaker 2:And think about the magic of that Oh. experience. Incredible. I can't wait. I mean, it's actually, you know, the bear case for humanity is we're all just, like, you know, generating, like, perfect
Speaker 1:Funny jokes. Jokes over and
Speaker 2:over and over just laughing.
Speaker 1:That's when we all wirehead for sure.
Speaker 2:Yeah. We all go crazy.
Speaker 3:Yeah. It turns out like the ultimate addiction is like, you know, humor. Right?
Speaker 1:Yeah. Yeah. We're just laughing ourselves to death. Entertaining ourselves to death is
Speaker 3:I will say, you know, GPT-4.5 was, like, a really interesting moment for humor for me. Right? Where, like, you know, I was testing this model internally a lot. And, like, one of the ways I kinda realized, wow, this actually is, like, an interesting step change above, like, our existing models, is that it's almost funny sometimes. Like, the greentexts were, like, great.
Speaker 3:You know what mean? Like, you know
Speaker 1:Yeah.
Speaker 3:Like, its long form jokes, like, maybe left a little bit to be desired, you know, to your point. Like, sometimes it would construct something that looks a lot like a joke, and we'd get to the punch line, and I would just cock my head, like, what? I have, like, no idea what you mean there. Yeah. But to be fair, though, like, you know, if you asked me to write a good long form joke and you only gave me, like, 500 words to do it, and, like, 500, like, you know
Speaker 1:That's true.
Speaker 3:words' worth of thinking, I could not do it. Right? Like, you know, I actually do much worse, I think, than these models. Totally. In fact, I actually don't know if I've written any great long form jokes.
Speaker 3:You know what I mean? Like, it's one of those things where, like, we hold these models to a crazy standard. But I am so excited for them to get great at this. So I do think that, like, to your point, this could be the next ImageGen moment, where people, like, do this. It just brings joy into the world.
Speaker 3:They're having a lot more fun,
Speaker 1:like Yeah.
Speaker 3:Yeah. I want that.
Speaker 1:Right? It might also kind of solve your issue of, like, the beating up on the Waymos. Like, if there's a friendly robot and he's telling you a joke, like, it's a lot easier to give him a pat on the head and be, like, good robot, you know?
Speaker 2:Exactly. Are there versions of various models in the past that you felt an emotional sort of bond to? You know, like, the one that got away, or, like, an old friend that you went different directions in life with, and you don't talk to them anymore, but every now and then you smile to yourself thinking about them.
Speaker 1:3.5. They just don't make them like
Speaker 3:they used to. That one actually had, like, the bottomless pit jokes.
Speaker 1:You know what I mean? Like, that was Yeah. Yeah. Yeah.
Speaker 2:Yeah. Not even just in humor, but
Speaker 3:Generally. Yeah.
Speaker 2:Yeah. You're so close to them, you know, working with them, and I could just imagine, like, you know, a version being so good for one thing or something that you cared about, and then
Speaker 1:Ben Thompson still talks wistfully about Sydney. It's just like, oh, that was the most amazing
Speaker 3:Sydney was pretty great.
Speaker 1:Because Sydney was, like, this sassy online character, clearly from, like, Tumblr or some, you know, convexity that it fell into. I don't know. But who's your Sydney?
Speaker 3:My Sydney, the easy answer for me, is Claude 3 Opus. Okay. That was, like, such a great model, and not an OpenAI model too, or something I'll get yelled at for saying. But, yeah, a ton of life to it.
Speaker 3:I think I coined this phrase, big model smell, when it came out. I was like, this model if you give me a blind test between this model and, like, you know, other models that score as well at, like, academic benchmarks, I'm pretty certain that, much better than chance, I could pick out Claude 3 Opus. Mhmm. It just had, like, soul to it. Right?
Speaker 3:And it's a bit hard to see these things when you, you know, do regular day to day prompts. If I'm just asking it for, like, a recipe or something, it's not gonna be like, oh my god, it's a big model. Yeah. But I do think, though, it's easier to tell the difference in, like, model capabilities and kind of, like, character when you push them to extremes. Right?
Speaker 3:When you deploy them into, like, you know, agents that are doing, like, crazy things, when you give them tons of context, when you have them, like, tackle really out of distribution, hard problems, then you sort of see, like, the creativity emerge. Right? And this is a really fun thing. And I think it's a good lesson for playing with models. It's a good lesson also for, like, designing benchmarks and evaluating the models too.
Speaker 3:Right? Mhmm. But yeah. Like, you know, 3 Opus is pretty great. And I think my daily driver right now, you know, outside of, like, o3, is GPT-4.5.
Speaker 3:Yeah. Like, it's just such a fun model. Right? Like, I do feel, you know, closer to it than I do to previous models. It feels like it is more alive, if that makes sense.
Speaker 1:And are you using GPT-4.5 in a different way, because it doesn't have some of the functionality that you get from, like, an o3 or deep research product? What does an interaction with 4.5 look like for you? Is that just random questions throughout the day?
Speaker 3:Yeah. Yeah. Just, like, chatting, casual stuff. Right? These models are, like, you know, great colleagues, great therapists.
Speaker 3:Right? Like, you know, it's good to talk things out with them. It's one of those things you know, you can generate, like, a deep research report, you know, before bed every night and just become, like, a way smarter person. Right? Yep.
Speaker 3:You can also just, like, chat with these models about your life and, I think, become, like, a really well adjusted person. You know, like, there are, like, some people I talk to in SF who are really, really plugged into, like, you know, the model system. Right?
Speaker 1:Like, that's a 4.5 guy over there. It's like rooting for a different football team.
Speaker 2:Talk about, you know, when somebody asks you, like, you know, hopefully they don't ask you this too much, but if somebody asks you, like, oh, what do you think about the tariffs? Like, do your eyes just, like, glaze over, because, like, none of this stuff matters on, like, a ten year time horizon? Like, I was talking to John the other day, and I, like, sounded so just AGI pilled as I was saying it out loud. John was like, you sound ridiculous, but, like, I agree. And it was this basic sense that, like, there's just so much chaos in the world right now, politics Yep.
Speaker 2:Tariffs, all this stuff, you know, people burning down, you know, Teslas or whatever. And then you kind of zoom out a little bit, and you sort of look at the adoption of this technology and sort of understand the potential, and it's almost like, yeah, none of this stuff matters, because the world is gonna look so, so different. Mhmm. So, you know, are you able to just be, like, a hundred percent focused on the work and ignore all the noise? Or do you still Yeah.
Speaker 2:Care to look up every now and then and look around?
Speaker 3:Maybe I should be, like, more locked in than I am. Right? Like, maybe I should be, like, in a dark room, like, just talking to models and, like, you know, like, feeling it out. But
Speaker 2:Maybe we're models. Maybe maybe we're maybe the OpenAI team is generating us to test on you. Yeah. You know?
Speaker 1:We're a red teaming test.
Speaker 3:Simulation goes deep, man. Like, I was gonna tell you, but, you know, you found out too quick. Yeah. I think, you know, when I joined OpenAI, I, like, tweeted this thing about when you shoot an arrow, and, like, let's say you're in outer space. Right?
Speaker 3:There's no gravity, and you shoot an arrow, and it travels to, like, different destinations. Right? And in some sense, like, you know, if I shoot this arrow from, like, Earth without gravity or something, and I, like, you know, move my hand to the left a bit, it lands, like, half the universe away. Right? That's crazy.
Speaker 3:And I kind of, you know, believe the same thing here, because we, like, push these models to, like, crazy limits. Right? That, like, little differences in the initial conditions, of, you know, AGI being born, or, like, of the politics of the time, or, like, of the capital distribution or whatever, I think can compound to, like, incredible differences in the limit. And even though it seems that these systems will be incredibly powerful and maybe very robust to, like, you know, human will at some point, I don't think that is an argument for the initial conditions not mattering, if that makes sense.
Speaker 1:What is it like internally right now?
Speaker 3:I can't say too much about, like, kind of, you know, that.
Speaker 1:Just like your vibe in SF maybe.
Speaker 3:Yeah. Like I think there are a lot of sharp people that like care a ton about like this going well. Right? Yeah. It makes sense.
Speaker 3:There's this, like, great quote, I forget from who, the, you know, race car driver, I'm gonna butcher it, Mario Andretti or something. Right? Mhmm. But, like, you know, it's crazy at the Formula One level how many people think the brakes are for slowing down.
Speaker 3:Right? And, like, sometimes the brakes are there to let you go as fast as possible. Right? Like, to, you know, get this out into the world as quickly as possible, to make, you know, the most, like, kinda economic impact, you do have to do, like, a bit of safety.
Speaker 3:You do have to do a bit of testing. I think this should increase over time as, like, you know, the expected impact also increases. I think OpenAI is incredibly good at, like, managing this. Right? I think that, like, you know, our leadership have done it for a while.
Speaker 3:They will continue to get better at it too. And I think, you know, it's kind of a cool thing. Right? To, like, you know, build systems that have, like, unexpected behaviors and, like, you know, can do things that maybe we weren't aware of originally. And it's an important science to, like, figure this out before we release, and to build things that, you know, get safer.
Speaker 1:Yeah. No. I completely agree.
Speaker 2:Last question for you. Do you see us having a Ghibli style moment around agents this year? Not necessarily at OpenAI, but just potentially broadly. Because we've talked about this on the show a bunch, but, you know, people have been promising for decades now: you're gonna be able to talk to the computer, and it'll book you the flight and the hotel and make your restaurant reservation, and we haven't had that magic experience.
Speaker 1:I've seen a bunch of great Operator demos, but I haven't seen that viral moment where it's like, literally everyone that has access to Operator has used it and posted the results, because they got something cool out of it or whatever. Right?
Speaker 3:There's some sense, too, where, like, you know, Ghibli is awesome because it takes three seconds to look at, like, an image. Right? And be like, oh, wow, like, that's beautiful. Like, it made, like, you know, your engagement photo way cooler.
Speaker 3:Right? Like Yeah. It's, like, really quick. But I do think that, like, some of the most important agents, like, some of the, you know, most important things that we'll build over the next, like, few years, might not have results that you can look at in three seconds and be like, wow, great job, model.
Speaker 3:Right? And in fact, I actually think that as the economic potential and power of these systems increases, maybe it actually becomes slightly harder in the limit to tell the difference between, like, you know, capable systems and very, very capable systems. Right? Just as models get better, it might be a bit harder to tell the difference between, like, the good model and, like, a brilliant model.
Speaker 3:Right? The cool thing, though, is that it's a nines of reliability scaling problem. As you, like, kind of push these nines out, and as you make them more reliable in more contexts, at some point, they are just doing, like, a sizable chunk of economic labor. Right? Yeah.
Speaker 3:So, you know, do I expect a Ghibli moment? I'm unsure. But I do expect, like, you know, these things to just provide a lot, a lot of economic value, if that makes sense.
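A quick back-of-the-envelope illustration of the nines of reliability point above: if an agent succeeds at each step of a long task independently with probability p, the whole n-step task succeeds with probability p^n, so each additional nine of per-step reliability compounds into a dramatically higher task-level success rate. A minimal sketch with purely illustrative numbers, not measurements of any real model:

```python
# Sketch of why "pushing the nines out" matters for long-horizon agents.
# Assumes each step of an agentic task succeeds independently with
# probability p; the whole n-step task then succeeds with p ** n.
# Numbers are illustrative only.

def task_success_rate(per_step_reliability: float, num_steps: int) -> float:
    """Probability an agent completes all num_steps steps without an error."""
    return per_step_reliability ** num_steps

for p in (0.99, 0.999, 0.9999):  # two, three, and four nines per step
    rate = task_success_rate(p, num_steps=100)
    print(f"per-step reliability {p}: 100-step task succeeds {rate:.1%} of the time")

# Prints:
# per-step reliability 0.99: 100-step task succeeds 36.6% of the time
# per-step reliability 0.999: 100-step task succeeds 90.5% of the time
# per-step reliability 0.9999: 100-step task succeeds 99.0% of the time
```

Going from two nines to three nines per step takes a 100-step task from failing most of the time to succeeding nine times out of ten, which is the sense in which more reliability in more contexts unlocks a sizable chunk of economic labor.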
Speaker 1:Yeah. Yeah. Makes a lot of sense.
Speaker 2:Beautiful. Well, thanks for joining. Love this conversation.
Speaker 1:Yeah. This is great. We gotta have you on again soon.
Speaker 3:This was awesome.
Speaker 1:This is fantastic. Yeah.
Speaker 2:You're welcome anytime. You're welcome anytime.
Speaker 1:Great conversation. Good to see you. Talk to you soon.
Speaker 2:Congrats on the milestone and keep it up.
Speaker 3:Yeah. Thanks. Likewise, guys. Talk to
Speaker 2:you soon. Boom. He's gone.
Speaker 1:Bye.
Speaker 2:Fantastic.
Speaker 1:Yeah. Yeah. Great conversations. The OpenAI team, great guys. And fun that they can talk as freely as they can, given their roles, you know.
Speaker 1:Obviously, it's a very important and valuable company, and
Speaker 2:It's usually not. Most companies, just, period, would not allow
Speaker 1:Yep. Posting or any of that stuff.
Speaker 2:Well, posting is one thing, but joining
Speaker 1:a live stream, a
Speaker 2:live show.
Speaker 1:High stakes.
Speaker 2:That's, you know, high stakes.
Speaker 1:But it's fun. It's great.
Speaker 2:But it's fun. And I thought they had, you know, fantastic answers.
Speaker 1:Well, let's
Speaker 2:Makes me very optimistic.
Speaker 1:Let's go through another one: Numeral, which puts sales tax on autopilot. They do. Spend less than five minutes per month on sales tax compliance. Backed by Benchmark. Numeral's a new sponsor on the channel,
Speaker 2:and we wanna give a shout out to Numeral, Sam and Matt. For those who don't know, Numeral works with over a thousand different ecommerce and SaaS businesses, including a few of our favorites.
Speaker 3:Yeah.
Speaker 2:Ridge and Graza Yep. And many many more.
Speaker 1:If you're running an ecommerce business or SaaS company you
Speaker 2:have to do your sales tax on Numeral. Use Numeral.
Speaker 1:I mean, it's funny that they're saying spend less than five minutes per month because I feel like the alternative is not spending more time, but it's billing some, like, tax lawyer, like, $10,000
Speaker 2:Yeah, it's actually insane. It could be
Speaker 1:really expensive to do sales taxes pre-Numeral. So big shout out to Numeral.
Speaker 2:And a lot
Speaker 1:of Go check it out.
Speaker 2:Yeah, you want to get ahead of this early. I think a lot of founders both in e comm and SaaS aren't really taking this
Speaker 1:Yeah.
Speaker 2:Seriously until they're at scale.
Speaker 1:Totally.
Speaker 2:There's real money on the line. Yep. So start early, start often.
Speaker 1:And we have some breaking news that was sent to us about the Newsmax IPO. Have you followed this story?
Speaker 2:Newsmax. Newsmax. I'm Newsmaxing.
Speaker 1:So Newsmax is up big again today. It IPO'd yesterday. It's up 1900% in two days from a $10 price. It's over $200 now. The price is all over the place.
Speaker 1:It's a twenty six year old digital media company, founded by Christopher Ruddy in 1998. It has been variously described as conservative, right wing, and far right. Newsmax's media division includes the cable and broadcast channel Newsmax TV. And I can't even keep track of all these right wing media companies, because there's so many. Like, there's Truth Social, there's Parler, there's Gab, and Newsmax, and a whole bunch of these.
Speaker 2:I'm so happy we never discussed politics.
Speaker 1:But But
Speaker 2:we will discuss an IPO.
Speaker 1:We will discuss an IPO. Especially one that's popping. It's funny, I saw some jokes about how CoreWeave priced exactly at $40, which is what you wanna do to not leave any money on the table. Newsmax obviously left a ton of money on the table by pricing the IPO so much lower than the price that it's floating at now. Wait.
Speaker 1:So they, in theory, sold shares at a price a lot lower than they could have, and could have put a lot more money on the balance sheet
Speaker 2:So it's being valued beyond $30,000,000,000 right now. It seems totally reasonable. One x.
Speaker 1:That's great. Well, if they hit a stumbling block, maybe Jeremy Giffon will have to come bail them out. He's hiring an investor, and we wanna highlight it here on the show. He says you'll be the first hire, working with him on the most interesting, complex, and asymmetric special situations in tech. You'll be a good fit if you, one, trade annual letters like Pokemon; two, are high paced and biased to action; three, are unapologetically money motivated.
Speaker 1:Four, have created and sold a product. Were an ideological minority at a top 10 school, debated risking it all with an SBA loan, bought stock before you could drive, work in PE but long for higher MOIC, work in VC but long for higher ROIC, went through YC but yearned to invest, can get a meeting with anyone on earth, made money online in high school, value heterodoxy over consensus, value making money over being right. Steve Schwarzman defines eights as those who can follow marching orders, nines as those who can execute and strategize, and tens as those who can sense problems, design solutions, explore new directions, and make it rain. Eights and nines need not apply for this job. This role will have an equal focus on sourcing, analysis, execution, and operations.
Speaker 1:In essence, you'll wear every hat as we build the firm together. Every hat. It'll be all encompassing and demanding, but autonomous and yours to shape. You'll invest and transact with the best operators and financiers in the world and produce
Speaker 2:work like dogs.
Speaker 1:At an exceptionally high standard. So go hit up Jeremy, and switch your business to ramp.com. And go hit up Jeremy and let him know that TBPN sent you.
Speaker 2:Undoubtedly dream job and working for Jeremy will change your life.
Speaker 1:Yeah. He used to be a Tiny man, but now he has big aspirations.
Speaker 2:That's true. Worked for Tiny. Worked for Tiny. He's still on Tiny's board. Yeah.
Speaker 2:But some of the smartest people in the world call Jeremy to get advice before they make important decisions. That's true. And so being able to work for him, it's like, many people have said it'd be like getting 2,000 MBAs at once.
Speaker 1:Yeah. So It's like buying shares in CoreWeave in 2004.
Speaker 2:Yeah. Right? Yeah. It's like buying Solana in '98.
Speaker 1:Yep. Exactly. Anyway, where should we go next? Oh, I mean, I don't know if we actually I think we did post this on the timeline. You might not have seen it, but Jeremy Giffon obviously does special situations.
Speaker 1:If your company raises too much money from VCs, and you build a decent business but you're just underwater on the pref stack, he will come and unstick your business. And it's somewhat adjacent to, kind of, injury lawyers: if you have a cap table accident. So we put up a fake billboard on the 101 that said, cap table accident?
Speaker 1:Call Jeremy Giffon. Put his real phone number on there. He didn't like that, so he censored it out before he posted it. But if you're looking to buy a billboard, go to AdQuick. AdQuick.com.
Speaker 1:Out of home advertising made easy and measurable. Say goodbye to the headaches of out of home advertising. Only AdQuick combines technology, out of home expertise, and data
Speaker 2:to enable
Speaker 1:efficient seamless ad buying across the globe.
Speaker 2:So, anyway. You can be a startup, an agency. Anybody can Yep. Go and leverage AdQuick, and you'd be silly not to.
Speaker 1:Yep. And did you I think you threw this in the chat. Angry Tom says, it's over. Meta just announced MoCha, a new model that turns text or voice into super realistic talking characters. There's no way to tell anymore.
Speaker 1:And we actually saw a fan of the show turn us into Lego characters, which is very cool. And I think this, like, repurposing and Ghibli-fication, we've only seen the very beginning of it. And it is interesting. We talked to Samir about the VTuber thing. This type of model, obviously, is very expensive, very slow right now.
Speaker 1:But in the future, you could have a model that's running on your livestream, and we could be Lego characters the whole show one day if
Speaker 2:we want. People that watch the whole show could get Ghibli mode.
Speaker 1:Ghibli mode.
Speaker 2:You'll be able to
Speaker 1:Ghibli filter.
Speaker 2:In the sidebar. Yeah.
Speaker 1:Yeah. Just like a
Speaker 2:Yeah. I thought this was interesting. I think Meta is struggling from a lack of quality posters on their team, because
Speaker 1:They don't really have a Nobody's pumping Well, they're probably over on Threads, and Zuck is posting on Instagram. He's actually very good at going direct. He posts the direct-to-camera videos when he launches new stuff. But I want more.
Speaker 2:Some news I thought was interesting: Brett Goldstein launched a new all in one CRM today, on the anniversary of Gmail launching.
Speaker 1:They launched on April 1, oh yeah.
Speaker 2:April 1. So apparently, back in the day, they launched and nobody actually believed them. Because, like, launching a product on Yep. April 1 is not a good idea. Huberman launched a funny product this morning. Contact
Speaker 1:lenses.
Speaker 2:Blue light blocking.
Speaker 1:I fell for one April Fools' joke today. Very sad. Which was? Trey's.
Speaker 4:Yeah.
Speaker 1:He said he was buying a grain silo in Ohio, and I was like, yeah, that's believable.
Speaker 2:Yeah. Yeah. Too believable. You can't
Speaker 1:make it up. He was just teeing himself up for a joke: he wanted to be a "cereal" entrepreneur with a moat. Yeah.
Speaker 2:So Brett says the email form factor hasn't changed at all since then. After many attempts, it's clear that we're actually just looking at email wrong. And so what Brett is building is an email company that's sort of, like, embracing email as a database. So, like, basically building applications on top of email, everything from, like, to do lists to project trackers
Speaker 3:Mhmm.
Speaker 2:To news, things like that. So that's exciting. And then, clearly, someone else had a similar idea. Notion also launched Notion Mail, a new email service, also coinciding with the launch. I think the strategy of launching products on April 1, when every company is doing fake product launches, is questionable. I get the, you know, sort of, launch-on-a-memorable-day idea.
Speaker 1:Underrated, you think?
Speaker 2:I don't think it's I think, potentially, you know, you just gotta keep launching. Right? So, like, Brett should just keep launching over and over. Totally. And Notion should figure out, you know, how to launch new features, things like that. But anyways, cool to see these two launches.
Speaker 2:You know, Superhuman was fantastic when it launched. I remember the FOMO from everybody, of, like, how do I get Yeah. an invite or whatever.
Speaker 1:I really want a new AI first email client. And I know, yeah, everyone's gonna say, like, Google will kill this in a few years. But, like
Speaker 5:Yep.
Speaker 1:I'll pay you a hundred dollars a month for two years straight. And, like, I think that's a reasonable business.
Speaker 2:Yep.
Speaker 1:Because I I'm drowning in email. They're not filtered properly. Like Totally. There's, like, these buttons now where it's just like, do you want Gemini to summarize this?
Speaker 2:Like, crazy. No.
Speaker 1:I can scan. That's not what I'm looking for. I'm looking for better spam filter.
Speaker 2:Yeah. It's crazy that Gmail values a newsletter in the same way as, like, you know, some critically important
Speaker 1:It's crazy. I mean, they do put it in, like, a separate box sometimes, but it's very tricky. It's not good. And, yeah, I mean, I wanna almost find a product. I bet it exists.
Speaker 1:I wanna find, like, a new RSS reader that I can basically take all my newsletters and just put there, so it's, like, a different app, essentially, because, like, I don't want it in my email inbox at all. And then I want a proper unified inbox, because Gmail and Chrome now want me to use, like, different Chrome profiles with different logins, and it's just getting so confusing. I just want all my email in one place. Anyway Yeah.
Speaker 1:How'd you sleep last night?
Speaker 2:I slept.
Speaker 1:You went to bed late accidentally? Let's get some drama going. That's drama.
Speaker 2:Hopefully, that's not copyrighted.
Speaker 1:I'm gonna No. No. No. No. No.
Speaker 1:Vivaldi's three hundred years old. We're good.
Speaker 2:We're good.
Speaker 1:How did you sleep? Tell me, Jordy.
Speaker 2:Eighty eight. Oh. How did you sleep, John?
Speaker 1:I slept 75. Nights that fuel your best days, turn any bed into the ultimate sleeping experience. Folks, go to 8sleep.com. Use code TBPN.
Speaker 2:Use code TBPN. Get $350 off. I'm very excited. We're gonna be having Matteo Yeah. Cofounder of Eight Sleep on the show soon.
Speaker 1:Wanna break it down?
Speaker 2:We're gonna make him our official sleep coach. Yeah. Not just a sleep
Speaker 1:expert on the show. You know, Eight Sleep is so helpful, but at this point, it's doing everything it possibly can, and clearly, I need to do more. Yeah. Like, the reason I'm not getting one hundreds, it's not on Eight Sleep anymore. Autopilot has optimized so much, and it's pushing a boulder uphill.
Speaker 2:It's pushing you harder.
Speaker 1:It's pushing me harder. I wanna be a consistent one hundred, and I even know the stack that I need to get there. Probably taking less stimulants all day
Speaker 2:Waking up between four and five daily Yep. means that I'm basically getting into bed when it's light out now, after the time change. And it just feels so silly
Speaker 3:It is.
Speaker 2:Like to be getting into bed and you're like, oh, I guess I'm five years old now. But Yeah. That's Whatever it takes. Anything for the show.
Speaker 1:Yep. Anything for the show. Should we close out with this this Dwarkash Patel post? I thought this was interesting. It's a long one, but I thought, yeah, I thought it'd be good to read through.
Speaker 1:He says, that feeling when Gwern casually articulates a more insightful framework for thinking about your life's work than you've ever sketched out yourself. Gwern writes, six days ago, and there's a screenshot: you can see it as an example of alpha versus beta. When someone asks me about the value of someone as a guest, I tend to ask, do they have anything new to say? Didn't they just do a big interview last year?
Speaker 1:And if they don't, but they're big, can you ask them good questions that get them out of their book? Big guests are not necessarily as valuable as they may seem, because they are highly exposed, which means both that, one, they probably have said everything they will say before, and there's no news or novelty. This is really important when you're doing interviews. You need to get people off their book, and I think you're very good at that, with the asking people about the current thing, essentially. Yep.
Speaker 1:Because there's a lot of people that have passed on their timeless wisdom seven different times on various podcasts, but they haven't talked about humanoid robots yet. And so you get them on that, and it's interesting. Or, two, they are message disciplined and
Speaker 2:careful to talk their book. Ask Senra why all the great allocators invested in the right life partner.
Speaker 1:That's true. Good point. In this analogy, alpha represents undiscovered or neglected interview topics, which can be extracted mostly just by finding them and then asking the obvious questions, usually by interviewing new people. Beta represents doing standard interview topics and people, but much more so: harder, faster, better, and getting new stuff that way. That's us.
Speaker 1:Lex Fridman podcasts are an example of this. He often hosts very big guests, like Mark Zuckerberg, but nevertheless, I will sit down and skim through the transcript of two to four hours of content and find nothing even worth excerpting for my notes. Fridman notoriously does no research and asks softball questions. They're going hard on Lex. Oh, it's rough.
Speaker 1:And invites the biggest names he can get, regardless of overexposure. And so, if you do that, you will get nothing new. He has found no alpha, and he doesn't interview hard enough to extract beta. So he's sort of the high expense ratio index fund of podcast interviews. I think this misses the fact that, like, for a lot of Lex Fridman's audience, you might not have heard Mark Zuckerberg before, and it's fun to just listen to him for a couple hours.
Speaker 1:So Yeah. I still think there's value there. Anyway, Sarah Paine, on the other hand, who was a guest on Dwarkesh and blew up, seems to have been completely unknown and full of juicy nuggets, and is, like, winning the lottery. You can make your career off a really good trade, like Paine, before it gets crowded.
Speaker 2:Great trade.
Speaker 1:And we gotta have her on. We're taking her: she's coming on our show next, Dwarkesh. Just kidding. But you're welcome to come opine on the current thing, Sarah, if you'd like to join.
Speaker 1:But we're not gonna go anywhere near as deep as Dwarkesh. I wanna hear what she
Speaker 2:thinks about Unitree Robotics. Yeah.
Speaker 1:Yeah. Break it down for us. Break it down. What's your p(doom)? However, if another successful podcaster has her on, they will probably not discover Paine as their most popular or growth productive guest ever.
Speaker 1:The well is dry. Paine may have more to say someday, but that day is probably closer to five years from today than tomorrow, and that's a very good point. So a good interviewer adopts an optimal foraging mindset. Once you have harvested a patch of its delicious food, you have to move on to another patch which hasn't been exhausted yet and let the original patch slowly recover. So a great guest for Dwarkesh's blog would be, say, Hans Moravec or Paul Werbos.
Speaker 1:Moravec hasn't done anything publicly in at least a decade and is fallow, while Werbos has been more active and in the public eye, but still not much, and is such a weird guy that just about any question will be interesting. Reich was also a good get, because while Reich is very public in some senses, he's written popularizing books, even he is still obscure. Almost none of what he has published is well known, and he is involved in so much fast paced research that even the book is now substantially obsolete, and he has a lot of new stuff to say. And Reich will have more stuff to say if revisited in, say, two years for an update. So a harvester will be making a note to revisit him if the current crop of interview candidates in the pipeline is looking marginal.
Speaker 1:A difficult and mediocre guest would be Tony Blair. He can surely say many interesting things about the current geopolitical context and his work since being PM, but he is a super experienced career politician who has survived countless Question Times and may eat you for breakfast and exploit you for ulterior purposes, rather than vice versa. Similarly, Mark Zuckerberg and Satya Nadella are tough nuts. There's meat there. But are you willing enough to bring down the hammer, or will you settle for a mediocre result that mostly just fills space and is not a must watch?
Speaker 1:A bad guest might be someone controlling and extremely PR savvy, like mister Beast. This is the sort of guy who will give you a bad interview, pushing his book shamelessly, and then might wind up spiking the interview anyway if he felt he wasn't getting enough out of it, and just drops it as sunk cost, though it was weeks of work on your part and blows a hole through your schedule, and that's not his problem. That's why we only spend two minutes texting guests and then no minutes prepping, so we just have them on for fifteen minutes and try and get some
Speaker 2:We do it live.
Speaker 1:Throw them off.
Speaker 2:No. I mean, I I It's
Speaker 1:interesting feedback.
Speaker 2:No. We try to it's less interesting to talk to a founder and have them only talk about their business
Speaker 6:I agree.
Speaker 2:Because it's just, like Yep. you sort of know it, and you've heard it elsewhere. Yeah. It's on their website, etcetera.
Speaker 2:It's much more interesting to kind of get their opinion on the market broadly. But this is great. In other news, Circle has officially filed for an IPO.
Speaker 1:Oh, that was the Polymarket ticker.
Speaker 2:Yeah. So this was cool. So I actually have the Polymarket pulled up right
Speaker 1:now. Yeah. Did you just jump
Speaker 2:in? It had been sitting a week ago, "Circle IPO in 2025" was at 54%. And yesterday morning, it just absolutely popped up to 88%.
Speaker 1:That makes sense.
Speaker 2:And then now it's sitting almost at a hundred. And, yeah, I'm excited to to like dive into this. We should try to do an s one breakdown tomorrow.
Speaker 1:Cool.
Speaker 2:And, yeah, this is a big one.
Speaker 1:Well, congratulations to everyone over at Circle.
Speaker 2:And that is a good place to close out the show. Remember, if you wanna get a bottle of Dom Perignon
Speaker 1:Leave us five stars on Apple Podcasts.
Speaker 2:Be creative. You can put together an ad.
Speaker 1:Be creative. You can
Speaker 2:do whatever you want.
Speaker 1:Send it to us. Tweet it at us, and we will send you this delicious bottle of Dom Perignon.
Speaker 2:And we are going to make a decision by the end of day tomorrow.
Speaker 1:And we will also send you the Zbiotics to go with it.
Speaker 2:There you go.
Speaker 1:So you're not hungover.
Speaker 2:Today was fantastic.
Speaker 1:It was great.
Speaker 2:So fun. I can't wait for tomorrow. I'm so happy it's only Tuesday.
Speaker 1:Yeah. Me too.
Speaker 2:It'd be rough. Three more days.
Speaker 1:Big shows ahead of us.
Speaker 2:Big shows.
Speaker 1:Thanks for watching.
Speaker 2:Thank you, folks. Have a great day.
Speaker 1:Bye.