Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PST, Monday through Friday. Available on X, Apple, Spotify, and YouTube.
You're watching TBPN. Today is Thursday, 06/19/2025. We are live
Speaker 2:from the TBPN Ultra Dome. The
Speaker 1:temple of technology. The fortress of finance. The capital of capital. We have a great show for you today, folks. There's some breaking news that's dropping right now.
Speaker 1:I think we gotta go to the printer cam.
Speaker 2:Really?
Speaker 1:Because we have an update from friend of the show. Let's see if this works.
Speaker 2:Do I need this?
Speaker 1:No. We need a moment of silence gong, because it's about the Elon Musk news out of SpaceX. But we got an update from friend of the show Ashlee Vance coming in here hot. He says, I happened to be at Neuralink last night when Starship went boom, and so was Elon Musk until well past midnight Pacific time.
Speaker 1:He was in a three hour plus long meeting when the explosion happened. Meeting ended. I assume that's when he learned about it. And then he went back to work. Wow.
Speaker 2:An absolute dog.
Speaker 1:What a grinder.
Speaker 2:I mean, the video was absolutely insane. It was. We will cover it a little bit. Yep. You had shared a transcript from it.
Speaker 1:Oh, yeah.
Speaker 2:This was earlier this morning. I had seen the video of it going boom. Yep. You shared this transcript where one of the engineers is saying, hey, really quick, Sawyer, we just observed a couple of vents coming from the common dome in between the LOX tank and the methane tank. And from this angle, it almost looks like the methane tank is gone.
Speaker 2:Then he actually says question mark? Is that normal?
Speaker 1:He says
Speaker 2:Jack says that normal?
Speaker 1:Question mark.
Speaker 3:Yeah, it's crazy.
Speaker 2:I'm seeing some venting. Is that is that unusual? Is that normal? And the guy goes, yeah, it's probably normal. Then they literally say famous last words.

Speaker 1:You have the video? I just have the video.
Speaker 2:Yeah. Let's play it.
Speaker 1:Let's play this. I have no idea if this is actually related. We'll have to get someone on the show to dig into exactly what happened. I'm sure there'll be a post mortem on the explosion.
Speaker 2:They did say famous last words and then shortly afterwards
Speaker 1:It explodes.
Speaker 2:The entire.
Speaker 4:It is a crazy, crazy video.
Speaker 5:Just real quick, Sawyer. We
Speaker 1:Let's see if we can pull this up. In the meantime, let's tell you about Ramp, ramp.com. Time is money. Save both. Easy to use corporate cards, bill payments, accounting, and a whole lot more all in one place.
Speaker 1:Go to ramp.com to get started, of course. The other two major news stories we wanna cover today: $10,000,000,000, the price to buy the Los Angeles Lakers; $15,000,000,000, the price to buy Meta an AI leadership team. Fantastic post by Alex Konrad. Let's play the clip.

Speaker 5:I've just observed a couple of vents coming from the common dome in between the LOX tank and the methane tank. And from this angle, it almost looks like the methane tank is gone? Question mark?
Speaker 2:Is that normal?
Speaker 6:Jack, is
Speaker 7:that normal?
Speaker 1:And then I actually edited it out
Speaker 7:over there.
Speaker 5:I'm seeing some venting coming from in between the methane tank and the LOX tank. Is that usual? Is that normal?
Speaker 1:Yeah. That's probably normal.
Speaker 5:Keyword probably. Famous last words.
Speaker 7:Yeah. Weasel words.
Speaker 5:Famous last weasel words.
Speaker 1:These guys are just hanging out on a livestream watching, like, a static firing test. This is not a launch. They're not trying to launch the rocket. They're just putting it on the test stand
Speaker 4:Yeah.
Speaker 1:Firing it up to make sure that everything works, and it just completely exploded. We'll go deeper into that and some of the reaction and the news in a little bit. But in the meantime, let's talk about the other piece of breaking news that came out of the printer just after we got off the stream yesterday.
Speaker 2:Oh, right.
Speaker 1:Right. This is news from the information. Meta is in talks to hire former GitHub CEO, Nat Friedman and Daniel Gross to join AI efforts.
Speaker 2:And partially buy out their venture fund.
Speaker 1:Yes. So there's a ton of details here. So let's read through the information article, and then we'll go to some of the reactions. So Meta Platforms is in advanced talks, not just talks, advanced
Speaker 2:Advanced talks.
Speaker 1:Has that been defined or quantified? What does that mean? Like, are we past the coffee meeting? Is this a thirty minute or a two hour conversation?

Speaker 1:How long are the talks until they become advanced? Yeah.

Speaker 2:You know, exact numbers are being thrown around. It's very possible.
Speaker 1:Yep. So they're thinking about bringing in Nat Friedman and Daniel Gross to help lead AI efforts. As part of those talks, Meta is in discussion about partially buying out Friedman and Gross's venture capital firm, NFDG, which holds stakes in top AI startups and is worth billions of dollars on paper. If the talks are successful, Gross would leave Safe Superintelligence, which he cofounded with former OpenAI chief scientist Ilya Sutskever last year. At Meta, Gross is expected to work mostly on AI products, while Friedman's remit is expected to be broader.
Speaker 1:Both Gross and Friedman are expected to work closely with Meta CEO Mark Zuckerberg and Scale AI CEO Alexander Wang or Wong, whose hiring by Meta was finalized last week in a $14,300,000,000 deal. Big numbers.
Speaker 2:Big numbers being thrown
Speaker 1:around. I think this is gong worthy. Even though we're just in advanced talks, we gotta hit the gong.
Speaker 2:We try not to hit the gong for advanced talks,
Speaker 1:but scoops are scoops. It's such a big number.
Speaker 2:Fantastic. Strong hit.
Speaker 5:Part of the talks.
Speaker 2:Yeah. I guess, you know, a couple things I was immediately thinking about when I saw the headline. I don't know if this is common knowledge, but my understanding was that some of the money in NFDG was Zuck's. Yeah. Right?
Speaker 2:So they were already investing
Speaker 1:There was an
Speaker 2:behalf of Zuck to some degree. So this shouldn't be a huge surprise. And then I think the bigger thing is, what does this say about SSI? Right? If DG is willing to leave SSI despite it being, you know, such a young company that's already highly valued, I imagine DG's stake is in the billions of dollars there.

Speaker 2:Yep. So to leave that and go to Meta says something. I don't know exactly what it says.
Speaker 1:I think it could potentially say a few things. One is that maybe artificial intelligence is more of a sustaining innovation than a disruptive innovation. And so just by training a fantastic model, you're not immediately going to be able to overcome the network effect at Meta. And so Meta is maybe potentially a better place to go and, you know, really reap the rewards of artificial intelligence. That's kind of a signal because, you know, no one at Google was really thinking about joining Yahoo.
Speaker 1:Right? There wasn't a lot of flow that direction.
Speaker 4:Yeah.
Speaker 1:It was like, we're onto something. We are going to disrupt. You know, same thing with Amazon. I'm sure Bezos wasn't losing people to Barnes and Noble. Right? I think this is the analogy.

Speaker 2:Unless Barnes and Noble threw out a couple max contracts. They've got a couple mercenaries. But, yeah.

Speaker 1:If this was truly disruptive, you would think that you would say, well, I definitely don't wanna be with the incumbent. I don't wanna be in the legacy player, because Yeah. There's nothing that they can do to capitalize on the new wave of technology. And so there's been this question about AI, clearly an incredible technology.
Speaker 1:Clearly, you know, one of the greatest inventions. It's up there with electricity and fire. It's really, really cool. The computers talk now. It's incredible. At the same time, what is the market dynamic that drives how this technology will accrue value in various places?
Speaker 1:Who will the winners be? A whole bunch of startups? Will there be a monopoly player, and does it come from a startup? Will there be a monopoly player and then sustaining innovation in every other Mag Seven company? And this is the question that everyone has been talking about for a couple years now.
Speaker 1:This is what Ben Thompson writes about, what we talk about all the time.
Speaker 2:And I think people are gradually waking up to the idea that it's possible that a lot of the value creation at the foundation model layer will happen at OpenAI because of their consumer products. Yes. Right? And this aligns with Sam's piece from last week, The Gentle Singularity. Yep.
Speaker 2:It's basically saying like we created intelligence Yep. And it's less weird than we thought. Yeah. Right? Yeah.
Speaker 2:Which is a step back from how things were talked about, and a stark difference from, say, AI 2027, which is, you know, extremely AGI-pilled, saying we're gonna continue accelerating. Yeah. And I don't know. I think it's right to kind of read into this, and two things can be possible. It's possible that Ilya will create, you know, a very important lab with SSI.

Speaker 2:But it's also possible that they might never grow into their $30,000,000,000 valuation.
Speaker 1:Is that where they are now?
Speaker 2:They're currently priced at $30,000,000,000. So for DG to leave Yeah. As a co founder of that company. Sure. I'm sure he'll get a 10 figure package if he goes to Meta.
Speaker 2:Yeah. If this goes through. But he's also leaving, I would imagine, billions of dollars of, you know, shares.
Speaker 1:Yeah. This seems very rumor mill at this point. Like, this could go a bunch of different ways. It could just be talks, and maybe they just come on as, like, advisors or something. Or they join the board. Like Yeah.

Speaker 1:Meta has a board that includes people that don't work at the company. And Yeah. And they add a lot of value there. So that could happen. It could also be that Meta winds up acquiring SSI.

Speaker 1:That would be a wildly different take on this. Yeah. Or it could be that they leave. The Information seems somewhat confident about this, but you know
Speaker 2:it's only a couple sources. SSI has multiple co founders. It's very possible that, even since the company was started, certain members of the co founding team like Ilya Yeah. Want, you know, to build what they see as super intelligence Yep. And it's possible other members of the team are like, yep, this is like Yep.

Speaker 2:More software. Yep. We're gonna build this out in a bunch of different places. Yep. Ultimately, you know, it just, I don't know.

Speaker 2:You don't typically see people get off rocket ships. I agree. Yep. Especially cofounders, unless there was an extreme, you know, rift.
Speaker 1:There's another side to this, which is a question about structure. Even if the goal still is superintelligence, what is the corporate structure and the capital formation structure that delivers superintelligence? Because we saw this with OpenAI when it was a nonprofit. There was simply no way to marshal a $10,000,000,000 donation to a nonprofit for a large GPT 4.5 or GPT five level training run. Yeah. There was no way to marshal that type of capital.
Speaker 1:Like, the richest people in the world had already donated a $100,000,000, and there was not really a lot of appetite for, yeah, next year I'm 10 x ing my donation.
Speaker 6:Yeah.
Speaker 1:And so they had to become a for profit. Yeah. Then you look at the flywheel of what it takes to continue to develop, and continue to do these training runs, and continue to invest in reinforcement learning. And it feels like you need a data feedback loop, and you also need a financial feedback loop to be able to justify more and more investments. And so we're gonna talk to Mike at ARC AGI, and there's this interesting thing that we heard, which was that the reason the foundation models are not able to one shot ARC AGI right now is because they're all just, like, doing the nice thing and not reinforcement learning on
Speaker 2:it. Mhmm.
Speaker 1:But if they actually did, like, some fine tuning around it and they were like, hey, we wanna knock this benchmark off, they could. And what that tells me is that for any really well defined problem, like chess or Dota 2 or Go or League of Legends, you can go and say, hey, we're doing a specific training run for this one problem, and it's gonna get really good at it. The weird thing is that the economy and, like, the global value creation chain from humanity is potentially extremely long tailed. There's potentially not just, like, five skills, like, oh yes, you know, IMO level math, and you're good, and you generalize.
Speaker 1:You might need to go and dig into all these different pieces of value. Yeah. And having a feedback loop, or an economic model like what OpenAI has with their app that generates a ton of revenue, or like what Meta has where they can deploy these products in all sorts of different ways and get billions of people using them very quickly. That actually might be the only way forward. You might not just be able to go into monk mode, come up with the perfect algorithm, and then train it on some, like, medium sized cluster.
Speaker 1:You might actually need to just scale energy, scale data center capacity, and scale users smoothly for decades to get there. So I don't know that this is updating my, like, probability of superintelligence ever arriving. It's more like, how will
Speaker 2:it arrive? And how many labs that are losing billions of dollars a year can the capital markets support, and for how long?
Speaker 1:Exactly.
Speaker 2:Right? Yep. You have xAI, Thinking Machines, Safe Superintelligence.
Speaker 1:Yep.
Speaker 2:You know, Anthropic and OpenAI are kind of in their own categories, in that they are generating a lot of revenue.
Speaker 1:It's also hard because it's not like biotech, where if you come up with a machine learning algorithm or you come up with, you know, the transformer, you patent it and then you just make money off of it forever. Like, that's not the way these innovations work.

Speaker 2:Until the patent expires.
Speaker 7:Yeah. Yeah.
Speaker 2:That's not Novo Nordisk.
Speaker 1:Like, if that was the case, I would actually maybe be more bullish on SSI, because I would say, well, Ilya is clearly, like, an incredible researcher. If he goes off with, you know, his team and comes up with the next great training paradigm
Speaker 4:Yeah.
Speaker 1:And then patents that and is able to license it to Google and OpenAI, that could be extremely valuable. But that's just not the way the structure of the industry is. Yeah. It's
Speaker 2:not like drug development.
Speaker 1:Exactly. Anyway, it's a fascinating story. There's been a ton of reaction to this. Nick says, this is somewhat related, Karpathy literally said Meta's Llama ecosystem is becoming the Linux of AI, and you're blackpilling?
Speaker 1:And so this is kind of like a narrative violation.
Speaker 7:A lot of people

Speaker 2:We should get a little bit more into the article, because it does give some color.
Speaker 2:So Friedman has been involved in Meta's AI efforts for at least the past year. In May 2024, he joined an advisory group to consult with Meta's leaders about the company's AI technology and products. Earlier this year, Zuckerberg asked Friedman to lead Meta's AI efforts altogether, a person familiar with the discussions said. Friedman declined, but helped brainstorm other candidates, including Wang. Mhmm.
Speaker 2:While Zuckerberg was skeptical Wang would leave Scale, Friedman convinced him a deal was possible, said a second person with knowledge of the discussions. As the Wang hiring came together, Zuckerberg approached Friedman again. This time, Friedman agreed to a deal of his own. He is currently expected to report to Wang, who is roughly twenty years his junior. Both men will be part of a small group of Meta leaders that Zuckerberg refers to as his management team, or M team.
Speaker 2:For Gross, the talks with Meta put him in an awkward position with SSI, a startup formed with the goal of building a leading AI company insulated from short term commercial pressures. So again, SSI's strategy from the beginning is saying, we're not gonna release anything until we create superintelligence.
Speaker 1:I just think it might be the nature of the economy and the nature of artificial intelligence and the structure of the market that make it impossible to insulate yourself from short term commercial pressures.

Speaker 2:Yeah. The question is, you have billions of dollars on your balance sheet and Yeah. You hypothetically could just do AI research forever just off of the interest yield alone. Yeah.
Speaker 2:Except for the fact that if you wanna compete from a scaling standpoint, you have to spend billions of dollars on GPUs and, you know, data centers and training runs and things like that. So we'll see. There's a very real tension there that will have to be resolved somehow. Yep. The startup SSI hasn't yet launched a product or described in detail what it plans to build. Gross's departure for Meta would damage an important investment for some top venture capital firms.
Speaker 2:In April, SSI raised $2,000,000,000 at a $32,000,000,000 valuation from investors such as Greenoaks, Andreessen Horowitz, and Lightspeed Venture Partners. It has also raised money from Sequoia Capital. They basically got everybody together. Together, Friedman and Gross have invested in some of the buzziest AI startups, including search startup Perplexity and robotics startup The Bot Company.
Speaker 1:That's Kyle Vogt's company.
Speaker 2:The firm had more than $2,000,000,000 of assets under management as of last year, though that figure is likely higher now with the increase in value of some of their startups. It's wild. Anyways, this feels crazy, but Yeah. Nat Friedman independently going to work at Meta is not that crazy. Yeah.
Speaker 2:I feel like the craziest part is, you know, someone like DG going from SSI to Meta. But at the same time, you know, it's very possible that SSI and Meta could work out some type of relationship, and maybe that's not getting reported yet.
Speaker 1:What's interesting is that both of these guys, Daniel Gross and Nat Friedman, were at one time thought to be, like, future really, really significant leaders in Mag Seven companies. So Daniel Gross started an artificial intelligence company. I believe it went through YC, or maybe he went to YC after, but he sold it to Apple. And then at Apple, everyone was kind of like, wow, now that he's in there leading AI at Apple, he's going to be kind of, like, this young incredible talent.
Speaker 1:Maybe he'll be like the next Steve Jobs. Maybe he'll like take over the company one day.
Speaker 4:Yeah.
Speaker 1:People were kind of like waiting for that, but it didn't seem like Apple was really set up for it.

Speaker 2:He was accepted into YC in 2010. He was the youngest founder ever accepted. Yeah. Yeah.
Speaker 1:Yeah. And then he went back to YC as a partner shortly after he left Apple. But there is a different, like, fork in the road where Daniel Gross is, like, next in line to run Apple after Tim Cook, if they were set up to empower someone young, which I don't think any of these big companies really are, maybe except for Meta. And then Nat Friedman has the same thing, where he's CEO of GitHub. He goes into Microsoft.
Speaker 1:You know, it was always a possibility that, you know, GitHub's really important for this $500,000,000 business. It's growing. It's code gen. Like, he's set up in the tech industry. Like, he could have potentially taken over for Satya at some point.
Speaker 1:Yeah. Yeah. That's just Copilot. And so there was a world where you could see them rising through the ranks. But we don't think about it this way, because most of the succession plans in manager mode big tech companies are more managers.
Speaker 1:We don't tend to acquire founders and let them take the helm. But Zuck, it's not like he's stepping aside by any means, but he's very much leaning into this idea of like, there is something special about these founders, these people who have built companies, these people who are at the heart of the technology really, really in the midst of things. Get them on my side at any cost. And I love it. I think it's amazing.
Speaker 1:I think
Speaker 2:Nat and Daniel both want to make a dent in the world. Totally. Especially in the context of AI. Right? So they're not gonna wanna go to Meta and just cruise and make ads 10% better.
Speaker 2:Yep. Or, you know, that kind of thing. Make it easier to generate
Speaker 1:You do this deal and then you just go and rest and vest. I don't think that's gonna happen.
Speaker 2:No. I can't I can't see
Speaker 1:it. Anyway.

Speaker 2:It would be interesting too. I wonder, you know, would they continue to be able to invest, you know, independently? Or would there be some kind of structure that says, like, you know, you actually have to just go all in on this? If I was Zuck, I would hope and expect that. But who knows?
Speaker 1:Whatever they're working on, I'm sure they'll be using Linear over there. Linear is a purpose built tool for planning and building products. Meet the system for modern software development: streamline issues, projects, and product road maps. And they got Linear for agents, folks. Dylan Patel is doing a little meme on this.
Speaker 1:Zuck founder mode master plan. Don't pay the PyTorch and LLM people enough. Lose 20% of the PyTorch people to Thinky, Thinking Machines. Hire Alex Wang and Nat Friedman for ten plus billion dollars to help you recruit talent. Inflection-hire back the PyTorch people at 10 x their previous total comp.
Speaker 1:And so Dylan's obviously saying, like, you should've just bet on the same people earlier and kept them. Unclear how much of it was, you know, really about pay. But clearly, that is not a gating issue anymore. Yeah. The floodgates have opened. This was something that was identified earlier.

Speaker 1:We covered a timeline post about this, where it was like, how will Apple compete in a world where they can't justify paying anyone $10,000,000 a year, like, if that's the new normal, or that's, like, the value of some of these people that are gonna do some of this research? You're gonna be kind of hamstrung. And it's not because you're not spending $10,000,000 on an organization, it's because you're not spending $10,000,000 on a person.
Speaker 4:Yeah. It's a crazy new thing.
Speaker 2:Yeah. Sam Altman was taking shots at Meta.
Speaker 1:Yeah. That was last week. It's on the cover of the Financial Times today.
Speaker 2:You got it right here. Sam came out and said, Meta started making these giant offers to a lot of people on our team, like $100,000,000 signing bonuses and more than that comp per year. Mhmm. I'm really happy that at least so far none of our best people have decided to take them up on that.
Speaker 1:Yeah. The the meta game in here is like wild. It's so good.
Speaker 2:Three d chess.
Speaker 1:Yeah. None of our

Speaker 2:best people. He's just getting into

Speaker 1:Zuck's head. Yeah. Yeah. Yeah. Yeah.
Speaker 1:Somebody had a good breakdown of
Speaker 2:that. He says, the strategy of a ton of upfront guaranteed comp, and that being the reason you tell someone to join, really the degree to which they're focusing on that and not the work and not the mission, I don't think that's going to set up a great culture. Altman added, I mean, the only thing here is, like, tell that to the world, you know, the world of, like, Wall Street and, like, hedge funds, where, like, if somebody's just really good at making money, you'll just offer them, like, a maxed out contract to come over to your team, and it's entirely, you know Yeah. Like, motivated by
Speaker 1:The value that they're creating. But it's very trackable. It's a lot harder
Speaker 2:in Yeah. In that case.
Speaker 1:But still, you know, it's clearly up there if you're moving the market cap. Like, you can kinda tell. Spore says, is Ilya's SSI already DOA if its cofounder is potentially about to be poached? Good question. Swyx says, these guys are already centimillionaires, so we're not talking about a $100,000,000 signing bonus anymore.
Speaker 1:It's the first $1,000,000,000 signing bonus in history. This is going to cost. Zuck clearly is in spend mode if you think $100,000,000 bonuses are high. This is a guy who lost $14,000,000,000 in 2022, $16,000,000,000 in 2023, $18,000,000,000 in 2024, and $20,000,000,000 in 2025 to invest in VR. All he has to do is cut VR spend for 2025, and he has more money than Anthropic has raised in its entire lifetime.
Speaker 1:Wow. I hadn't put it in those terms. Never bet against Zuck long term, but I think we're in for another costly period of investment. And we know what happened last time he went so hard on a thing. We do not have the balls or imaginations to do what he is about to do.
Speaker 1:Yeah.
Speaker 2:Can you imagine being Tim Cook, running Apple, a $3,000,000,000,000 company, making a paltry $74,600,000 in 2024? After just going through the most brutal year
Speaker 1:You know like It's brutal.
Speaker 2:Pulling the company back from the brink of this trade war.
Speaker 1:He's sitting there fantastically.
Speaker 2:He's checking his pay stubs, being like

Speaker 1:I'm making less than someone who doesn't even work there. He just got paid out bigger than me.
Speaker 2:When you put it into context that some 24 year old AI researcher who's cracked Yeah. And, like, deserves a great role at a great company with great pay, is making more than the CEO of Apple.
Speaker 1:Yeah. It's rough. It's absolutely
Speaker 2:brutal. Anyway. We still need to organize this protest, hit the streets for Tim Cook.
Speaker 1:We do. We do.
Speaker 2:Head over to Cupertino.
Speaker 1:Yeah. We do. We should design some posters in Figma for it. Go to figma.com. Think bigger, build faster.
Speaker 1:Figma helps design and development teams build great products together.
Speaker 2:While all these companies are duking it out, Figma is powering Yeah. The design teams of all of
Speaker 1:them.
Speaker 2:Yes. It's kind of
Speaker 1:like a one hand washes the other scenario.
Speaker 2:We'd love
Speaker 1:to see it. Nathan says, Zuckerberg, Wang, and Friedman to lead AI for tens of billions of dollars was not on my bingo card yet. I don't think many people predicted anything like this happening. I think everyone was kind of saying, like, there's probably going to be, like, some sort of v two of the Llama strategy, but being so talent focused, I think, was not on the table. It was more like, okay, maybe they'd do an acquisition of a foundation model lab, or maybe they would just build an even bigger data center, since they have capabilities there.
Speaker 2:I think
Speaker 1:And it's been a very different
Speaker 2:We don't know much about sports, but I was trying to think, it's probably, like, Luka Doncic going from Dallas to the Lakers. That was a big surprise.
Speaker 1:That was a surprise.
Speaker 2:SSI co founder going to Meta. Yeah.
Speaker 1:It's the Luka of tech. Yeah. For people who know what reinforcement learning is. Exactly. Luke Metro chimes in, dog, how much is Zuck paying?
Speaker 1:He's over at Anduril right now. The meme has been, do you wanna just sell ads, or do you wanna build something important like Anduril? Well, with these pay packages, I think you're gonna get some Anduril engineers being like, I'm willing to sell ads. That's good. Like, I'm willing to optimize ads.
Speaker 2:Actually, I see a lot of ads throughout my day. I've always been kinda fascinated by them.
Speaker 1:You know, protecting the world and ensuring Western led peace. Creating world peace is noble, but at a certain dollar value, ads are cool too.
Speaker 2:Absolutely insane. Well, I'm excited to see this unfold. I mean Yeah. It's interesting the way that this reporting is written. It feels at times like it's already happened, but it's clearly not confirmed.
Speaker 2:So
Speaker 1:Yeah. It could kinda go either way. But, I mean, we heard the leaks about Scale AI, like, a few days before, and there was some speculation about what was going on there, and it became very real. And so, you know, who knows? Maybe it does become real, but we'll keep tracking it here.
Speaker 1:Nir Syan says, ladies and gentlemen, Midjourney has done it. It's a new AI image to video model. Justine Moore from Andreessen Horowitz mentioned this yesterday, but the posts have been going out on the timeline. We have our intern Tyler Cosgrove in the studio today playing with Midjourney Video. How's it going so far?
Speaker 1:Can you give us a little review?
Speaker 5:Good. It's been a lot of fun so far.
Speaker 1:So you're in the Midjourney Discord right now?
Speaker 5:Yes. Fantastic. I've actually made I've made four videos so far. Okay.
Speaker 4:If we
Speaker 1:can pull those up. Yeah. Let's see. I'm excited to see these.
Speaker 2:He's in the Discord.
Speaker 1:He's in the Discord.
Speaker 2:Live from the Discord.
Speaker 1:He's in the trenches. How has the interface been? You just upload an image. Does it do the same thing that you get with a Midjourney image, where you type a prompt and then you get four images and you get to pick one?
Speaker 5:Yes. Yeah.
Speaker 1:Okay. So you get four video results.
Speaker 5:Four videos.
Speaker 4:They're five seconds long.
Speaker 7:And then
Speaker 1:you up res the one that you like basically? Yeah. Okay. Yeah.
Speaker 5:I think when you export it, it basically does that. Got it.
Speaker 1:Oh, okay. Here you kicked it off with an image of us reading the paper.
Speaker 2:Alright. See.
Speaker 5:This was, you know, kind of bear domestication, right?
Speaker 1:Okay. So so so did you include a prompt alongside the image? Yeah. Yes. Okay.
Speaker 5:You add an image and then you prompt it.
Speaker 2:Oh and then you add
Speaker 1:the prompt. Cool. Cool. It knows us too well.
Speaker 5:Okay. That was the first one.
Speaker 1:Okay. Let's see the second one.
Speaker 4:Love it.
Speaker 1:This is great.
Speaker 2:Bear domestication is in our future.
Speaker 1:Okay. This is us on our phones. And what is this?
Speaker 5:We have a kind of an angel flying over.
Speaker 1:Oh, that's very, very bizarre. I was thinking Pegasus.
Speaker 2:It's kind of
Speaker 1:has a
Speaker 2:bit of a demon vibe. Yeah. Who's the angel?
Speaker 5:In the back that
Speaker 1:What was the prompt for this one?
Speaker 5:Let's see. That one
Speaker 1:Angel wings or something?
Speaker 5:Angel flies up behind two men as they look back and smile.
Speaker 1:Okay. Yeah. Okay. The actual the actual video on us okay. What's this one?
Speaker 1:What's this
Speaker 2:one here? Uh-oh. This is
Speaker 1:us in the studio. Uh-oh.
Speaker 2:A little meta.
Speaker 1:Oh. That's extremely demonic. This is super creepy. I don't like this. Boom.
Speaker 1:Bring
Speaker 2:the horses. Boom. Bring the air horn back.
Speaker 1:Oh, that's weird. They steal the gong. Wow.
Speaker 3:Is Surprised by Juddles.
Speaker 1:Hold your position. Okay.
Speaker 5:Okay. I think there's one more.
Speaker 1:Yeah. Let's play the last one. What you got? What's this one? That last one was bizarre.
Speaker 1:Very, very Yes.
Speaker 5:So the physics are pretty solid. Sometimes you see a bit of the same thing with Veo 3 where, like, if a car is driving, right, you'll see the back of the car.
Speaker 1:Okay. We got us standing at the pool. Let's take a look at this. Okay. This one's cool.
Speaker 1:Okay. We're back. The lighting on that's incredible. Okay. Finish strong.
Speaker 1:10 out of 10 for Midjourney.
Speaker 2:Lighting again. There we go.
Speaker 1:I love this. This is great. That is cool. It looks really good too. Yeah.
Speaker 1:Always bring your F-35 Joint Strike Fighter into the John was a cool Tomcat was it? Something like that. Okay, that was cool. I like that one
Speaker 2:a lot.
Speaker 1:Yeah. A lot less demonic.
Speaker 2:Took us on a bit of a roller coaster there. Yeah. Did not like that angel.
Speaker 1:That was weird. That was. That was very weird.
Speaker 2:A little bit creepy.
Speaker 1:The aliens were very bizarre, but you redeemed it all. You won it all back. Fantastic. Give us a review, like an overview, of the actual experience. How long does it take to generate these?
Speaker 1:Are you hitting rate limits? How much does it cost? Give us, like, the breakdown of, like, you know, the consumer experience.
Speaker 5:Yeah. It's really good. I mean, I I think so. I'm on the $30 a month plan Okay. Which is kind of the mid tier one.
Speaker 2:But it's very fast. I mean,
Speaker 5:it takes probably ten seconds, fifteen seconds. It's
Speaker 4:really fast.
Speaker 1:Veo 3 is, like, two minutes.
Speaker 5:So you can iterate like super quick.
Speaker 1:Oh, that's cool. Okay.
Speaker 5:But it's very easy to use. I mean, I I haven't used so I'm actually not on the Discord. I'm on the website.
Speaker 4:Okay.
Speaker 5:Yeah. But it's very easy to use. Yeah.
Speaker 1:Okay. Very cool. Very cool.
Speaker 5:And you can run them concurrently so I could do multiple
Speaker 1:Okay.
Speaker 5:At the same time.
Speaker 2:That's great.
Speaker 5:But, yeah, really great.
Speaker 1:Awesome. Well, very fun. We'll be tracking it more, asking people how it's benchmarking, how it's working. We'll have to have some fun with those. I had a lot of fun with the Veo 3 ones.
Speaker 1:We were doing the crashing through the Hollywood Sign, a few too many bottles of Dom Pérignon in the back of a Ferrari.
Speaker 2:Yeah. I didn't like how there seemingly weren't guardrails.
Speaker 1:AI safety issue. Yeah.
Speaker 2:It's an
Speaker 1:AI safety shouldn't just be able to pop
Speaker 2:to visualize yourself Drunk driving? Yeah. It's
Speaker 1:ridiculous. Bottles
Speaker 2:of champagne flying out
Speaker 1:of the The quality was remarkable. Anyway, Midjourney is having fun on X. They say: introducing our V1 video model. It's fun, easy, and beautiful, available at $10 a month. It's the first video model for everyone, and it's available now.
Speaker 2:How many how many prompts do you get for $10 a month?
Speaker 1:I don't know. You wanna look it up?
Speaker 5:Yeah. I I mean, I think it's actually unlimited. Okay. It just takes longer.
Speaker 1:Takes longer.
Speaker 2:Okay. But now that's cool. Verify that. That's insane when you put it into the context of those outputs. They're in many ways better Yeah.
Speaker 2:Or on par Yeah. At least, like, from an entertainment value standpoint, as Veo. Yep. And Veo is $500 a month.
Speaker 1:$500
Speaker 2:still gated on
Speaker 1:I could only do three per day. I have to come back. They take two minutes a pop. Yeah. Speed of iteration is really, really key.
Speaker 1:So, I mean, that's the whole Discord model: get people iterating, sharing ideas, to explore the space and figure out what works. Like, even just from seeing those four, I feel very confident about its ability to render aircraft. And so I'm probably not gonna go and prompt a bunch more alien videos, but I'll definitely be prompting a bunch more F-35 videos because it seems to do that really, really well. And so the more people you have making more stuff, the more you learn the guardrails, learn how to use it creatively, and can actually make a better product. But Midjourney was having some fun.
Speaker 1:Devin Fan from xAI says, I know what I'll be doing this weekend, and Midjourney says, what weekend? And Will DePeace says LMAO. Blake Robbins says, Midjourney video is breaking my brain, and everyone's having a good reaction to this. Always fun to have a new AI tool, and we're talking to a couple AI folks on the show today. So we'll be running through that, getting their reactions, and talking more stuff.
Speaker 1:Elon Musk posted the, the very sad what is this? The Peepo? Pepe or something? It's the green frog. He's smoking a cigarette.
Speaker 1:He's not happy, probably because RIP to ship
Speaker 2:36. Brutal.
Speaker 1:Ashley Vance, we should
Speaker 2:I think a picture of Elon, you know, smoking a heater after one of his rockets blew up would become a timeless meme.
Speaker 1:That would be.
Speaker 2:It would be worth his comms team kind of working on putting that together. Maybe working with Ashley Vance to get that shot. Yeah. He could live.
Speaker 1:So girls say, I can't believe he didn't cry at the Titanic. Do men even have feelings? Boys crying at the sight of ship 36 exploding. Very, very sad. And then Elon says, just a scratch.
Speaker 1:So the entire thing blew up. It was just
Speaker 2:a flesh wound.
Speaker 1:It was intense watching it. I mean, the ball of fire here is immense. So the Starship exploded during a test in Texas, a setback for Musk's Mars ambitions. Now the Mars transfer window is very, very tight. Like, you can only get from the Earth to Mars, like, once every eighteen months or something or maybe even more.
Speaker 1:It's really hard because, like, if the planets are on the opposite side of the solar system, like, you just can't, like, even though you have a rocket, you just can't get over there. So you have to wait until they're lined up, and then you can do it.
Speaker 3:But realistically, skill
Speaker 1:issue? Realistically, true. Skill issue. If you build an even faster rocket, you could get there no matter what. Yeah.
Speaker 1:You just steer it around like it's a GT3 RS around the Nürburgring. No problem. So the explosion occurred during a static fire test. No injuries were reported. Thank goodness.
Speaker 1:We love autonomy. Very, very happy to hear that no one was injured during this because it looked horrific, and it looked like in any other scenario, there would be a bunch of technicians there, but fortunately, they were able to do everything remotely, which is great. And then Starship faces pressure to meet deadlines for NASA's moon mission and Mars exploration. So there's a big NASA moon contract that's very important, very material to the business. Obviously, SpaceX has a lot of other business lines, but this one's very, very important too.
Speaker 1:And we hope that they can get back on track. SpaceX is making an enormous bet on Starship, which stands roughly 400 feet tall at liftoff, as it tries to break ground with new reusable rockets. And the paradigm of Starship, it's not just a big rocket. It is way more reusable. Like, you look at the thing, it comes down, gets caught by those arms, can instantly be refueled and sent back up.
Speaker 1:You're talking about potentially, like, multiple flights per day. And so the problem here is not, can you build a big rocket? Humanity has done that before. Humanity's built a rocket that's roughly on par. We've gotten to the moon before.
Speaker 1:The challenge now is not, can we get to the moon? It's the same thing with, like, flying cars: the challenge is not, can we build a flying car? We have helicopters. Can we build one humanoid robot or one self-driving car in San Francisco? It's, can we actually scale these systems to the point that it is safe to go to the moon and back at the drop of a hat for $200?
Speaker 1:Like, that's the challenge. It's more of an economic and industrial might challenge, and that's a completely different challenge from just, can we get one rocket to the moon? An exquisite system. We're looking for reusable scalable Yeah. You know Frequency.
Speaker 1:Engineering systems. So good luck to Elon and the entire SpaceX team rebuilding. I'm sure it's a huge challenge right now. But let's do some ads. Tell you about Attio.
Speaker 1:Customer relationship magic. Attio is the AI-native CRM that builds, scales, and grows your company to the next level. Get started for free. Attio.com.
Speaker 2:Attio.com.
Speaker 3:I like it.
Speaker 1:You can
Speaker 2:use
Speaker 1:Wait. What is that what is that sound
Speaker 2:at the end?
Speaker 1:Is that attached to that soundboard?
Speaker 2:Guys, I think you botched the action movie sound effect.
Speaker 1:So Oh, no. It has that.
Speaker 2:On that.
Speaker 1:Okay. In other news, the Los Angeles Lakers have been sold for $10 billion in the richest deal in sports history. Guggenheim Partners CEO Mark Walter, who also owns MLB's Dodgers, is acquiring the storied NBA team in a move that makes it the world's most valuable sports franchise. And it's so funny because, like, the Wall Street Journal's framing of this is like, this is the biggest deal ever. No one's ever done a deal like this.
Speaker 1:And we're like, wait. So you're talking about, like, a Series A for, like, a foundation model company? Like, as a tech person, I'm just like, yeah. Like, $10 billion. I mean, we should ring the gong, but it's not exactly, like, the first time.
Speaker 1:It's not even the first time this show we've heard of a decacorn. Congratulations to the Lakers.
Speaker 2:Mark Walter and
Speaker 1:the whole It's fantastic. A major premium to the Boston Celtics, who sold for $6.1 billion. And now the Lakers are the most valuable sports franchise. But they just don't do enough volume. There's only a couple games.
Speaker 1:You know? They're not twenty four seven. Like Instagram. Does that ever go offline? No.
Speaker 1:No. There's always entertainment. Lakers, they're still doing seasons. They need to have twenty four hour basketball. They wanna really get there.
Speaker 2:We're around the clock. It's like endurance basketball. It's just a week-long game. You know? Gotta always have five players on the court.
Speaker 2:Just constantly. Running up.
Speaker 1:It's the only option. Jeanie Buss and her family, who have owned the Los Angeles Lakers since Jerry Buss bought the team in 1979. Wow. On Wednesday, they agreed to sell majority control of the storied team to Mark Walter, the sports investor. And I looked at the return on investment of owning the Lakers for those forty years: slightly under the S&P 500.
Speaker 1:Like, it was a really, really good deal, and it was a really great company that grew a lot, but it didn't outperform the stock market. Just diversification, bros. DCA, bros. Undefeated again. Well, if you're trying to DCA
Speaker 2:Well, the
Speaker 1:Do it on public.com. Investing for those who take it seriously. Multi-asset investing, industry-leading yields, trusted by millions, folks. Anyway, Walter, part of the ownership group that owns the Dodgers, has been part of the Lakers since 2021, when he purchased a 27% minority stake in the franchise. He's also a co-owner of Chelsea in the English Premier League, the WNBA's Los Angeles Sparks, and the newly formed Cadillac Formula One team.
Speaker 1:Let's hear it for Cadillac. Let's go.
Speaker 2:Let's hear it for Cadillac. Yes. Congratulations. John, you know,
Speaker 1:front at all.
Speaker 2:But John front-ran the Cadillac F1 team and got a Cadillac for himself over there. You can see the black
Speaker 1:It's great to have an American F1 team in the business now. Yeah. We've fallen off, but we're coming back.
Speaker 2:You're not gonna be able to get one of these in the whole country.
Speaker 1:I don't think so. They're gonna be too popular.
Speaker 2:F one team. Yeah. You know, it gets out on
Speaker 1:The sale marks the end of nearly a half century of Lakers control by a family that has become synonymous with Los Angeles sports and the glitz of professional basketball. The deal also comes at a time of skyrocketing valuations in professional basketball, which haven't come back to earth since the league announced a media rights deal last year worth $77 billion. When the Celtics sold in March, the $6.1 billion valuation exceeded the previous record valuation set for a sports team, the $6.05 billion sale of the NFL's Washington Commanders in 2023. Jerry Buss purchased the Lakers for $67 million in 1979. The team transformed from a franchise uprooted from Minnesota into one of the winningest and most valuable
Speaker 2:sports properties. They were founded.
Speaker 1:That's where the Lakers name comes from. Interesting. Minnesota is the land of ten thousand lakes. They were the Lakers because there are a lot of lakes in Minnesota. And then they just brought them to LA and kept the name.
Speaker 1:But that's what Lakers means. Yeah.
Speaker 4:Wow.
Speaker 1:The Buss family oversaw the creation of Showtime and presided over the NBA's last three-peat. A-listers like Jack Nicholson and Leonardo DiCaprio have become fixtures at the games. And when they sell merch, they need to pay sales tax. They should get on Numeral. Numeralhq.com.
Speaker 1:John. Sales tax on autopilot. Spend less than five minutes per month on sales tax compliance. You know all the time.
Speaker 2:11 championships since 1980. Their rosters have boasted many of basketball's brightest stars. Magic Johnson, Kareem Abdul-Jabbar, Kobe Bryant, Shaquille O'Neal, LeBron James, and LeBron James's son have all worn the Lakers purple and gold.
Speaker 1:I love it. Such a cool Yeah, the father son duo. I mean, I feel like that should have been a bigger national news story. It's such a cool thing. Think it's not like if they were winning championships together immediately, that might be a different story.
Speaker 1:But it's just so insane that you could be playing professional basketball with
Speaker 2:your son. You could have earned a better return by DCA'ing into the stock market.
Speaker 1:That's not why people own these assets, though.
Speaker 2:Owning the Lakers for a number of decades, I imagine, was absolutely priceless. So great investment.
Speaker 1:Get the owner's
Speaker 2:box. Great run.
Speaker 1:Yeah. All the perks, you have to add those in. What perks do you get from DCA'ing into the S&P?
Speaker 2:Lakers legend Magic Johnson hit the timeline. He said, just like I thought when the Celtics sold for $6B, I knew the Lakers were worth $10B.
Speaker 1:Let's go. The confidence of Magic Johnson. Great investor too. He's got a bunch of good stuff in the portfolio. Anyway, more news on the Scale AI transaction.
Speaker 1:So it's closed. I believe that Alexandr Wang has a badge at Meta and shows up to work in Palo Alto and clocks in at Meta HQ now. Scale AI is still a going concern. It's still a company. But every competitor is out for blood, and they want to take as much of the business as they can since, obviously, the perception is that Scale AI will primarily be working with Meta and that other foundation model labs might not want to do business with Scale AI anymore.
Speaker 1:Unclear if they can separate out the businesses. If they can separate them out fully over time and sell the position to other investors, create, like, a diversified I mean, they could even take the company public, at which point, I imagine it would be a lot less of a conflict of interest or, like, a fear. But there's been news that OpenAI said, hey, we're not using Scale AI for data anymore because it's too aligned with our competitor, Llama, maybe. But everyone's trying to Yeah.
Speaker 2:A lot of this is very predictable. Yeah. Right? I don't think Meta and Scale's teams looked at this and said, hey, if we sell right now to Meta, which is competing in open source AI
Speaker 1:Mhmm.
Speaker 2:We're totally gonna retain all of our customers, right? Yeah. Like people aren't just gonna immediately churn off.
Speaker 1:Yeah.
Speaker 2:And no, they were smart enough to know what would happen. And there was an article, I think yesterday, about OpenAI, you know, ending their relationship with Scale. From what we knew, like, they hadn't been doing much.
Speaker 1:Yep.
Speaker 2:For a while.
Speaker 1:Yep. That's part
Speaker 2:of the reason why Mercor had been
Speaker 1:And they also brought a big function in-house. Because for some of the more complex tasks, it makes sense to generate the reinforcement learning data yourself. Yep. And there are so many other services. Having, like, a single point of failure never makes sense for a business of that size, but
Speaker 2:Yeah.
Speaker 1:We'll see. So The Information has an article here about a little-known startup that has surged, hint hint, past Scale AI without any investors. This is interesting. After the Meta Platforms Scale AI deal, data labeling is looking like Silicon Valley's hottest new interest. That's an enormous opportunity for Edwin Chen's Surge AI.
Speaker 1:For years, data labeling existed in a tucked-away corner of Silicon Valley, a critical but unglamorous area of AI where companies like Google and OpenAI hire outside firms to improve their models by laboriously grading the quality of what they produce. Now a spotlight has unexpectedly fallen onto the field in the wake of Meta Platforms' decision to pay $14.3 billion for 49% of Scale AI, the best-known data labeling firm. But it's not the largest such firm nor perhaps the most impressive. That title belongs to Surge AI, founded by Edwin Chen. This is fascinating.
Speaker 1:I didn't know this. $1 billion in sales last year. Bigger than Scale. Yeah. So Chen's startup has won customers like Google, OpenAI, and Anthropic.
Speaker 2:It's a testament to the idea that, like, sure, you can bootstrap, but it's so incredibly hard to have any hype around your business if you're bootstrapped Totally. Because your investors aren't hitting the timeline for you Yep. On a daily basis. And also, if you're not trying to raise capital, you have less need to go and be loud and go on podcasts and talk to the press and all this stuff because you're just making a lot of money. And, you know, sometimes it can be beneficial for people to not know
Speaker 1:about you. So these are, I mean, these are crazy stats. So Chen is 37. He has no investors and has bootstrapped the five-year-old startup entirely by himself. It has 110 employees in offices in New York and San Francisco. The company generated more than $1 billion in revenue last year.
Speaker 1:Surge has told employees, a previously reported figure that exceeds the $870 million Scale generated in revenue during the same time period. And unlike Scale, Surge was profitable and has been from the beginning, Chen said. Moreover, Surge could see its sales get even larger if other companies copy OpenAI's decision to stop hiring Scale, a choice made over concerns about Scale's relationship with Meta, and shift business to Surge. Other key financial metrics couldn't be learned, like how much revenue Surge keeps after paying its workforce of mostly contractors. So there is a question about, like, the margin, since this is somewhat of a marketplace business.
Speaker 1:This could be a situation where, you know, a thousand-dollar contract comes in, and $800 of that contract goes to the actual contractor who's doing the work of the data labeling. But at the same time, even if it's $200 million in, like, net revenue, that's still a huge business. It's hard to imagine Surge not being a fantastic business if they haven't had to raise money. They have 110 employees, and they're used by Google and all these major foundation model labs. So it seems like a fantastic business.
Speaker 1:But if Surge could earn a valuation from investors similar to the one Scale received from Meta, such a price would make Chen a billionaire many times over, at least on paper, and quietly one of the wealthiest people in tech. Interesting. I'm very interested to see what he did before this company. Edwin Chen. I feel like I've heard that name before, but I don't know.
Speaker 1:As AI models transform from toys into real business tools, data labeling is becoming more and more essential. Contractors hired by companies like Surge grade the responses from AI models and write thousands of questions and answers in fields like programming, math, and law to feed those AI models. And so, you know, I wonder if this is gonna go the route of, you know, you are Deloitte or McKinsey, and you're going to have your team, but then also a company like Surge create a ton of training data around a specific workflow that is costing your business, you know, 20 or 50 or 100 million dollars every year. And so instead of, like, the AI BDR that's kind of generically writing emails based on, like, the average of the entire internet, it's like, no, this is a fine-tune for your business, perfectly trained, and it really distills what you do excellently.
Speaker 1:Yeah. I don't know if it'll go that way. I'm interested to talk to people about it. So Surge's subsidiary DataAnnotation.tech says workers get paid to train AI on their own schedule with wages starting at $20 an hour.
Speaker 1:Chen has distinguished Surge by making it the high-end shop, charging premium rates often two to five times what Scale might bill. Surge justifies the prices with its reputation for industry-leading work. Indeed, one former Scale employee said Surge often performed better than Scale in customer audits of labeling quality, and competitor Garrett Lord, who's coming on the show today and runs Kleiner Perkins-backed Handshake, readily acknowledged that Chen is the number one player. So I'm excited to talk to Garrett Lord today about this exact topic. Should be very interesting.
Speaker 1:You wouldn't know that from the coverage of Meta's blockbuster deal to quasi-acquire Scale AI. Its CEO Alexandr Wang, who is now joining Meta in a senior AI role, was widely regarded as leader of the data labeling field and had become a Silicon Valley celebrity, blanketing podcasts and conferences with his presence and posting heavily on X. Scale also raised $1.5 billion in venture capital, putting it on a very short list of companies that have raised that much, and it hired upwards of a thousand people. Wang had timed his exit perfectly given the traction of Surge, which had grown larger than Scale without outside capital and with a tiny fraction of Scale's workforce. Scale also missed the goal to hit a billion dollars in revenue last year, but a Scale spokesperson said the company
Speaker 2:stood behind its numbers. Scale wasn't profitable, either.
Speaker 1:It was not profitable.
Speaker 2:Which
Speaker 1:But it wasn't burning a ton of money. Yeah. I think they were efficient. They raised $1.5 billion and they still had almost $1 billion in cash.
Speaker 2:Yeah.
Speaker 1:So they weren't in trouble or anything, but at the same time, it wasn't a wildly profitable or a wildly lean business. But I don't know.
Speaker 2:It's absolutely fascinating
Speaker 1:to comp these two. It's a wild industry. Yeah, I mean, it feels like there's such an edge just in identifying this opportunity years and years ago. I mean, I guess Surge started four or five years ago, but it was certainly, like, pre-ChatGPT that all these companies got started. Some of them got started in self-driving car annotation, all sorts of stuff like that. But Chen studied linguistics and math at MIT and came to the idea for his startup after leaving college and witnessing firsthand how big companies struggle with data.
Speaker 1:Before starting Surge, Chen worked as a machine learning engineer at Facebook, Dropbox, Google, and Twitter. He worked at four different tech companies, just, like, going from one to the next. That's insane. He was developing recommendation and search algorithms and helping gather the data needed to train them.
Speaker 1:Despite the hefty resources of those companies, Chen encountered a lot of problems. At Facebook, for instance, Chen was tasked with helping build a Yelp competitor. His team needed to train a model that could correctly classify businesses, telling the difference between restaurants and grocery stores, for instance. To do so, they needed a dataset containing 50,000 accurately labeled businesses, which he found out would take six months for an outside firm to assemble. We had no solution other than waiting.
Speaker 1:We simply waited. When the data came back, Chen blanched. In some instances, it had labeled restaurants as coffee shops and coffee shops as hospitals. The data was complete junk. He wouldn't say which vendor Facebook had used.
Speaker 1:In 2020, he left Twitter to found Surge and picked up some of his first customers, executives from Airbnb and Neeva, a once-promising AI search engine startup, as only a founder in San Francisco might: bumping into them at rock climbing gyms in the city's Dogpatch neighborhood and the Mission District, talking up his startup. To get Surge going, Chen recruited data labeling contractors he knew from his previous roles and funded the startup using his savings. He wouldn't say how much he put in. Fortuitously, Chen focused on language modeling. Scale, by contrast, started out using more visual data for autonomous vehicles, which we talked about.
Speaker 1:Wow. Just as those types of models began to grow in importance. Less than a year later, OpenAI had hired Surge to fine-tune its models by teaching them how to avoid producing harmful responses, like racially biased language, based on a research paper the companies published together. By 2022, Anthropic had become a Surge customer.
Speaker 2:They're putting out research papers Yep. With OpenAI and still managed to stay this under the radar.
Speaker 1:Wow. Yeah. So look at this, the label largesse. Data labeling has proved to be a lucrative niche in AI. Surge founded in 2020 has over a billion in revenue, zero funding.
Speaker 1:Scale, founded in 2016, has $870 million in 2024 raised oh, this says funding raised, but this is clearly valuation or something, because it says $17.4 billion, which is not what they raised. Turing has $300 million annualized, raised $225 million. Invisible It's
Speaker 2:interesting. Turing, too, initially was like a marketplace to just hire developers, and I think they pivoted into
Speaker 1:Data labeling.
Speaker 2:Data labeling.
Speaker 1:Interesting. It's the same thing when you work with a cloud provider. The enterprise tech customer says, I don't know the internal workings that make their services work so well. I push a button, and I'm glad for the internal work that makes that happen. Data labeling companies typically use various techniques to make sure contractors aren't just dialing it in or phoning it in, I guess, when answering questions.
Speaker 1:For instance, companies randomly insert questions that have known correct answers or make sure labelers agree on the right answer to a question. So obviously, you scaffold up these, like, responses so that everything's double-checked, and then you can kind of see if people are messing around. But wow. What a beast of a business. I have no idea how big this thing is.
Speaker 1:Amazing. How'd you sleep last night? I'm on a comeback. I got an 89. Go to 8sleep.com.
Speaker 2:No. Get an Eight Sleep. I know what I got.
Speaker 1:Five year warranty, thirty night risk free trial, free returns, free shipping. What'd you get? You got a 100? 88? Let's go.
Speaker 1:Soundboard. I demand
Speaker 2:What what
Speaker 1:do you to Action Hall. Let's go.
Speaker 2:John did it.
Speaker 1:They said beach in my life.
Speaker 2:Never beat me.
Speaker 1:Night. I also took a nap after I got home. It was fantastic. We have our next guest coming into the studio. Mike from Arc AGI breaking down how all the different models are doing.
Speaker 2:Last time Yep. He was supposed to come on, Elon.
Speaker 1:Yeah. I
Speaker 2:know. So annoying. Trump decided to get in a massive timeline war.
Speaker 1:It was brutal.
Speaker 2:John wanted to power through it.
Speaker 1:I
Speaker 2:said, the people in the YouTube chat are are
Speaker 1:Demanding this.
Speaker 2:Demand that we do a timeline
Speaker 1:If we don't do it, they will put up attack ads against us. They will buy billboards against us. They will go to adquick.com and put up attack ads on TBPN if we don't cover the Trump Elon dust up. So we did. And now we're getting those folks back on the show today and later.
Speaker 1:But if you wanna take out an attack ad on us, buy a billboard and Adquick. Out of home advertising made easy and measurable. Say goodbye to the headaches of out of home advertising. Only Adquick combines technology, out of home expertise, and data to enable efficient, seamless ad buying across the globe. We should do some timeline.
Speaker 1:I I wanna do this Blank Street story, but we can do that later. Let's dig through some, what's in the timeline while we while we wait for Mike to join. Oh, we have Mike in the studio now. Welcome to the show. Mike, good to see you.
Speaker 1:How you doing?
Speaker 3:Good. Thank
Speaker 1:you so much for
Speaker 3:I'm glad there wasn't a major breaking drama story today. It's actually a little show up.
Speaker 1:Yes. Yes.
Speaker 2:I don't know if you watched that show at all, but, like, I was just sitting here. John was so locked in, wanted to just keep doing the show. I'm messaging him like, no. We, like
Speaker 1:actually have
Speaker 3:Terrible day to launch anything new. We, like, launched something. I saw several startups launch stuff. Yeah. Like, regrets to everyone who tried to get anything out that day.
Speaker 1:Yeah. Rahul. No. Actually, I remember them. It was it was Rahul.
Speaker 1:And
Speaker 3:Yeah. That's right. Julius had a launch.
Speaker 1:Yep. And what's the voice cloning company? Eleven Labs, I think they launched then.
Speaker 2:Well, and then I think Lulu said something. She was like, if you have bad news, today would be
Speaker 1:a good day
Speaker 2:to drop it. And then OpenAI actually flagged, like, hey. We had this, like, massive,
Speaker 8:you know
Speaker 1:Oh, yeah. This dust-up with the government. Right? Where the government was like, you have to retain all chats.
Speaker 2:Ever. Even if they're private. Do this. Yeah. I mean,
Speaker 3:that was, like, serious. I mean, we're still looking at the actual, you know, end results of that. But that went really deep into the world, I feel like, much more than, you know, kinda maybe even got reported on. Like Yep. Every single chat thread I was a part of was basically like, should I, like, stop using ChatGPT as much?
Speaker 1:Yeah. Was this unique to ChatGPT? Because it feels like Anthropic has a similar policy. It seems like Google might have a similar policy. Like, there was that story a year ago about a man who was using a Google phone with a Google Fi cellular connection and had all of his data stored in Google Drive and Gmail, and he took a picture of his child to send to a doctor, and it was kind of like a nude photo of the kid, to inspect the child for, like, a physical medical problem. And it got flagged as child abuse material by an automatic system, and the automatic system basically deplatformed him from everything Google.
Speaker 1:And so he lost his email, his phone number, all of his Drive stuff. And it was a false positive, but it was really hard for him to get reinstated. And so I guess my question is, like, it seems bad when we hear the story in isolation, but maybe the problem is not the individual company, and it's instead, like, the government policy, and this applies to all the different companies. But I don't know.
Speaker 2:Two things can be true. One is that it can be a massive overreach by that court to say, basically, you need to eliminate privacy Yeah. On your platform.
Speaker 1:Yeah. Yeah.
Speaker 2:You can simultaneously have questions around maybe I should use this product in a different way.
Speaker 1:Totally.
Speaker 2:And the inflammatory nature of it is that people use ChatGPT as, like, a confidant. Totally. Yeah. And tell it things that they wouldn't tell anyone. Yep. They wouldn't tell anyone in their life, and they're having those conversations. Yeah.
Speaker 3:I think that's why it struck such a chord, because that's true. I just saw some reporting from Coatue this week that, like, ChatGPT minutes per day are up to, like, thirty minutes a day now of usage. Yeah. And, like, it's closing the gap with, like, Instagram, which is just sort of nuts to think about. Like, I mean, who would have thought a productivity tool would ever, like, be on par with, like, a social media app, right? In terms of daily usage.
Speaker 1:It's so fun.
Speaker 2:But the interesting thing is it's filling a similar void, you know. It's delivering digital companionship in maybe the way that media products historically did, without any social element to it at all. Just, like, one to one. It's interesting to think, like, you know, we went from, like, what your friends or the Kardashians are doing is the most interesting thing.
Speaker 2:Mhmm. To, like, actually maybe the most interesting thing is this person that knows everything about you and is always on and always willing to talk, and, you know, who knows?
Speaker 3:Yeah. I think the consumer habits are definitely being formed around stuff today.
Speaker 1:Yeah. Yeah. I mean, I I find myself all the time, like, instead of scrolling YouTube looking for an inform like, an interesting video essay to explain how, I don't know, like, global shipping lanes work or something like that. Just going to ChatGPT and saying, hey. Like, break this down for me, and then I can just ask a follow-up and dive exactly to the layer that I want.
Speaker 1:And so, yeah, I'm definitely in that camp of using ChatGPT just as like an exploration and entertainment education tool, an infotainment tool much more than Instagram right now, at least for me. But enough about that. What is new in your world? How should we how should we frame kind of the the current horse race between all the Foundation Labs?
Speaker 3:Yeah. Okay. So, I'm gonna share with a link. I don't know if this is something y'all can pull up
Speaker 2:right Yeah. Yeah. We can
Speaker 1:pull it up.
Speaker 3:So this is a post that we published a little over a week ago. So, you know, I think there's been this really big like, what's the frontier right now in sort of AI progress? Right? The the the massive shift in the last six to nine months has been moving from this regime of, like, scaling up pretraining with more and more labeled data into these, like, test time compute test time compute, test time adaptation regime. People call these data reasoning models.
Speaker 3:Right? We're getting these models that tend to think out loud, generating additional data. Every major lab now pretty much at this point, except, I guess, for Meta, has one of these systems that we've been able to test and report results on. And I think there's some really, really interesting stuff we're starting to see. I think the most notable thing is that, like, there's not an absolute clear winner across the sort of, like, landscape right now.
Speaker 3:There's basically a sort of Pareto frontier that's emerged. One of the most important things, if listeners are listening here, that I think you should take away is that, like, anybody who gives you a benchmark score on an AI system that is a single number is just marketing to you.
Speaker 4:Mhmm.
Speaker 3:Because the reality is, now with these AI reasoning systems, you have to report scores on, like, a two dimensional plot. You have to consider cost and efficiency alongside the accuracy. And all these different lab providers have come out with different AI reasoning systems that score differently. They're trading off cost for accuracy at a different point. So, like, if you want just, like, the absolute highest raw horsepower, where cost and time are no object, o three high is gonna be your, like, clear winner today for that.
Speaker 2:Yeah.
Speaker 3:But if you're somebody who's saying, like, you know, hey, I wanna plug an AI reasoning system into an existing product I have, where I want, like, faster answers, and I'm willing to sacrifice some raw horsepower or generality for, like, quicker response times and lower cost, you might look at something like Grok or Gemini two point five pro thinking. There's not, like, a single best answer, which I think is pretty interesting. And this sort of frontier is, I think, what all the labs are working to try and figure out: okay, how can we get accuracy as high as we can, but also keep costs as low as we can on the efficiency front.
Speaker 2:Mhmm.
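Mike's point about single-number benchmark scores being marketing can be sketched in a few lines. The model names and (cost, accuracy) pairs below are made up for illustration, not real benchmark results; the only real idea is the dominance check that defines a Pareto frontier over cost and accuracy.

```python
# Hypothetical benchmark results: (cost per task in USD, accuracy).
# All numbers are invented for illustration.
models = {
    "o3-high":      (3.50, 0.88),
    "o3-low":       (0.80, 0.79),
    "fast-model-a": (0.05, 0.62),
    "fast-model-b": (0.04, 0.55),
    "mid-model":    (0.90, 0.70),
}

def pareto_frontier(points):
    """Return names of models not dominated on (lower cost, higher accuracy)."""
    frontier = []
    for name, (cost, acc) in points.items():
        dominated = any(
            c <= cost and a >= acc and (c < cost or a > acc)
            for other, (c, a) in points.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

print(pareto_frontier(models))
# mid-model drops out: o3-low is both cheaper and more accurate.
```

Any model beaten on both axes by another model falls off the frontier; everything that survives represents a defensible cost/accuracy tradeoff, which is why a single headline number hides the real comparison.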
Speaker 1:Yeah. I've noticed that more recently with kind of my default usage in ChatGPT. Four o seems super fast, but I'm always thinking, like, oh, I should maybe put this in o three pro, but do I wanna wait ten minutes? And I'm making that kind of economic calculus there, even though, yeah, because I'm on the pro plan, it's not a direct economic cost.
Speaker 1:I'm hitting a $5 It's
Speaker 3:time for you.
Speaker 1:Right? Like,
Speaker 3:Wait thirteen minutes.
Speaker 1:Yeah. Exactly. And so I'm kind of doing a four o thing over here and then switching back and forth. It's a very odd paradigm that we never really had to deal with in computing before. I mean, I guess, like, if you were downloading, like, the four k illegal Blu ray versus the Yeah.
Speaker 1:Yeah. I'm willing to wait a couple hours. Of course, I never did, purely hypothetically, but Yep. If you're on a torrent site, there was a time trade off between that and watching a screener. Well, I think this is actually one of
Speaker 3:the reasons these AI reasoning systems, I would assert. And I don't have the inside baseball on the data. But, like, from the outside looking in, I think there are some interesting suspicions that would suggest that these, like, AI reasoning systems, at least today in their current form, have, like, relatively weaker product market fit compared to the, like, non reasoning based systems. Right? The pure language model based things.
Speaker 1:Interesting. That's a huge departure from, like, the DeepSeek narrative that I felt like was really bubbling up, which was DeepSeek came out with, like, the first just, like, open access reasoning model. Like, reasoning had been tucked behind the OpenAI paywall. And so the pro users were familiar with what reasoning models could do. Everyone was very excited about them in tech or in the early adopter crowd.
Speaker 1:But DeepSeek, when that app came out and you could just install it and instantly see the reasoning chain Yep. It felt like everyone was like, oh, everyone's gonna be addicted to this forever and this is gonna be the new paradigm. But it seems like that might not necessarily be happening. Jordi, do you have something? Or I wanted to talk about, like, spiky intelligence and how that plays into this.
Speaker 1:We had someone come on and say, like, I think it might have been Sholto actually, talking about ARC AGI, just saying, like, hey, all the Foundation Labs kind of have, like, a truce that we won't reinforcement learn specifically against Arc. I
Speaker 2:don't
Speaker 1:know how real that is from your perspective, but it feels like increasingly, we might see, like, very task dependent RL runs kind of chipping away at specific things. Like, IMO level math is something that clearly, like, there's a ton of work to be done on, but we don't have as many verifiable rewards for poetry or comedy writing. And so that'll be a little bit messier and later down the road maybe. But at the same time, there's probably other verifiable rewards that are just smaller pockets of value here and there for these little micro tasks. And so I'm wondering if we will ever see, like, the marketing language around these models evolve. Like, Grok kind of did this with, like, we are the anti woke one, but that was more just in the overall, like, temperature or the vibe of the model.
Speaker 1:Sure. But I'm wondering if there if there'll be an idea of, like, this one's really good at math. This one's really good at research. This one's really good at that. Or if or if they're all kind of going down the same path with what they're trying to solve.
Speaker 3:I do think you probably are gonna see some domain specialization. I think my guess over the next twelve to twenty four months is that you'd see some domain specialization, where benchmark scores diverge, because of how all these labs are starting to do the next evolution of training Mhmm. Which is they're using RL environments to generate synthetic CoT traces, doing their sort of model trainings on that data, and they're trying to go get it on a lot of just different domains. Yeah. You know, the original o three paper, I think, was interesting on benchmark results, where, you know, on this new sort of CoT reasoning system, they had relatively high scores on math and coding, but the gap, or I should say, the step function increase in those scores, was much higher than the increase in, like, legal reasoning Mhmm.
Speaker 3:Which you would sort of intuitively expect that, like, legal reasoning would probably be one of the best, like, general domains if you trained a reasoning model that was really good at math and coding. Like, it's a language model, so, like, that would directly transfer into, like, the legal domain, because, like, okay, it's symbolic reasoning that's, like, self consistent. And that wasn't the case. So I suspect that's what we'll see there.
Speaker 3:You know, there's obviously the big Scale news. Yeah. The thing that I'm seeing now is there's probably, like, I don't know, at least a handful that I know of these new startups that have come up in the last several months that all are getting founded to basically go build RL environments to generate synthetic or semi synthetic data and, like, sell them to sort of the major labs or the major frontier folks building these next gen systems. I think we're gonna see more of that. I expect that's kind of what's gonna happen in a lot of areas.
Speaker 2:What does the data labeling market look like today? We were covering Surge AI, which a lot of people weren't familiar with. I'm sure I'd seen it at some point, but I was certainly not familiar with it until we covered it today. What do you think the labeling market looks like in five years? Do you think that Scale was getting out kind of at the perfect time?
Speaker 2:You know, I'm I'm curious.
Speaker 3:I think the timing was pretty good. I mean, like, look, the macro change here is from a regime where we're scaling up pretraining, where you want as much high quality labeled text as you can get your hands on to scale these foundational models, into one where we're trying to train process models, or, you know, make the foundation models really good for process thinking and CoT generation. Mhmm.
Speaker 3:That is a complete shift in how you wanna generate that data. You want an RL environment where you can create lots and lots of CoT traces, really very long traces over long running tasks as well, and you can feed all that right back in. And then take advantage of the scaling laws we already know about language models, how they work and how the performance increases as you get more examples of the data there. Mhmm.
Speaker 3:So, you know, my, like, I guess, like, macro bet would be, you know, the trend is heading down on the pretraining scale stuff. Yes. And significantly higher on RL environments.
Speaker 1:Yeah. So when you say RL environments, what you're talking about is moving from a paradigm of, I go to a data labeling company, and they hire a ton of contractors to generate new text or verify the responses or grade the responses from these models, to, I am now hiring top machine learning engineers, AI scientists, and having them design an environment where the reinforcement learning can happen, like, autonomously within the system. Right? So, effectively, these new startups that you mentioned, they are taking the massive, like, hundreds of thousands of contractors out of the loop for the next runs.
Speaker 1:Is that correct?
Speaker 3:Yeah. It's synthetic or semi synthetic in some cases. Yeah. There are example companies here. Like, Mechanize is one that got started recently that's doing this stuff. Morph, I think, is doing this stuff.
Speaker 3:Habitat's another one that's sort of doing similar stuff. Sure. There's just a lot in it. It's, like, very emergent. Many of these got founded in the last, like, couple months.
Speaker 2:Yeah. Yeah. Yeah.
Speaker 3:And I think that's a function of the demand and the pull from a lot of the frontier research groups that are wanting this data Yeah. To sort of do their RL training stuff.
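A minimal sketch of what an "RL environment with a verifiable reward" can look like, in the spirit of the synthetic-data startups discussed above. The class name, task format, and numbers here are invented for illustration; the key property is that the reward comes from an exact programmatic check, with no human labeler in the loop.

```python
import random

class ArithmeticEnv:
    """Toy RL environment: multi-step arithmetic with a verifiable reward."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def sample_task(self):
        # Generate a fresh task and its ground-truth answer programmatically.
        a, b, c = (self.rng.randint(2, 9) for _ in range(3))
        prompt = f"Compute ({a} + {b}) * {c}."
        answer = (a + b) * c
        return prompt, answer

    def reward(self, answer, model_output):
        # Verifiable: reward is 1.0 iff the final answer matches exactly.
        return 1.0 if model_output == answer else 0.0

env = ArithmeticEnv(seed=42)
prompt, answer = env.sample_task()
# A real pipeline would roll out a model's chain of thought here; we just
# simulate a correct final answer to show the reward check.
print(env.reward(answer, answer))
```

A real pipeline would sample many such tasks, roll out model reasoning traces, keep the (prompt, trace, reward) triples that score well, and feed those back into training, which is the feedback loop Mike describes.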
Speaker 2:So do you imagine companies like Surge and other players would try to pivot into this if they're expecting less
Speaker 3:I would expect that founder led companies like that would recognize this as part of the current shift and have bets on it if they don't already.
Speaker 1:Yeah. I was always wondering, like, the whole story of Scale was kind of a series of, like, various booms in training data. Like, the first one was data labeling for autonomous vehicles, and it seemed like that grew very, very quickly. And then the training paradigm around Waymo kind of shifted away from, hey, we need more and more labeled training data, to something else, just having the cars on the road and Yep.
Speaker 1:And generating real world data from that. Then there was, like, the second era of, like, pre training, generating the data for RLHF, and the big boom there. OpenAI and Meta were both big customers throughout that cycle. And then there was kind of a question of, like, what's the third act for all of this? And I was wondering, is it possible that there is a third act, but it's just something like humanoid robots or something like that?
Speaker 1:Like, put a bunch of people in mocap suits and generate a ton of training data for Yeah. What it means to pick up a soda 25 times in a row. It'd be a very different, like, training data product. But at the same time, like, we have mocap suits, and maybe that's relevant or maybe that's ridiculous to think. I don't know.
Speaker 1:What do you think about that?
Speaker 3:I mean, so, one definition of intelligence is that information conversion ratio, from the amount of information you have to an action or policy decision. Yeah. The intuition here is you can make a perfect decision given a set of data or information that you have. And oftentimes, the right thing to do is go collect new data. And so once we actually start, like, peaking out on, like, intelligence capabilities, you know, either plateaued because of research or, like, plateaued because we've actually got, like, AI that's close to AGI Mhmm.
Speaker 3:The limiting factor then becomes, like, the ability to acquire new information, new data. And if it's synthetic, like, on the Internet, that's gonna be a function of, like, you know, the sort of software bits world.
Speaker 2:Mhmm.
Speaker 3:And then the one beyond that is gonna be, well, it's literally gonna make contact with, like, reality, the universe.
Speaker 1:Yeah.
Speaker 3:And that's your that's your feedback mechanism to get new information into the system so you can, like, increase your overall intelligence.
Speaker 2:Yeah. What do you think? We had George Hotz on the show Mhmm. A few days ago, and he was talking about this efficiency problem, where, like, if you took all of the conversations that I had ever had and you transcribed them, it would be, like, a few megabytes of data. And I'm able to generate some level of intelligence, you know, based on that and
Speaker 1:The golden retriever level.
Speaker 2:Yeah. The intelligence level of a golden retriever. Yet an LLM needs, you know, effectively, like, terabytes of data.
Speaker 3:The sample efficiency is very low.
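The few-megabytes point survives a back-of-envelope check. All the numbers below are rough assumptions (words spoken per day, bytes per word, training corpus size), not measurements; only the orders of magnitude matter.

```python
# Rough assumptions, not measurements.
words_per_day = 16_000      # ballpark spoken words per person per day
bytes_per_word = 6          # ~5 characters plus a space
years_talking = 30

# A lifetime of one person's speech, as plain text.
human_bytes = words_per_day * bytes_per_word * 365 * years_talking

# A frontier LLM corpus: assume ~15 trillion tokens at ~4 bytes per token.
llm_bytes = 15e12 * 4

print(f"human lifetime speech: ~{human_bytes / 1e9:.1f} GB")
print(f"LLM training text:     ~{llm_bytes / 1e12:.0f} TB")
print(f"ratio: ~{llm_bytes / human_bytes:,.0f}x")
```

Under these assumptions a human's lifetime of conversation is on the order of a gigabyte, while the training corpus is tens of terabytes, a gap of four to five orders of magnitude, which is the sample-efficiency gap being described.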
Speaker 2:I mean, this is a
Speaker 3:true statement about the paradigm of deep learning compared to program synthesis, which is the bet that Mhmm. That we're making at Ndea. Ndea, program synthesis, is a regime that's much more sample efficient, and you're training models that can generalize out of distribution. But I think it's a completely fair statement. It's, like, a very damning statement that, like, we've got AI today that's trained on some colossus of, like, all of humanity's knowledge and text, right, over the last, like, five thousand years.
Speaker 3:It's on the Internet. And, like, what new ideas have they produced? And, you know, maybe I could point to, like, AlphaEvolve, which I think is a very, very impressive frontier AI system. You know? And it's legitimately finding new knowledge.
Speaker 3:It's creating new ideas, you know, verifiably. But they're very small, and they're on the margins of things that we kind of, like, already have been doing. Right? Matrix multiplications, things like this, or kind of in the regime of things that we kind of know about and can define and spec out for these systems. Whereas, like, if I took either of you guys and I gave you somehow the superhuman capability to have, like, all of humanity's knowledge in your head at the same time, like, I think you'd probably be able to produce at least one new idea, like, connect two random, you know, divergent domains.
Speaker 3:And be like, oh, hey, this kinda looks like this. Shit. Here's a new thought. Right?
Speaker 1:Yeah. Totally.
Speaker 3:That still feels like something that, I mean, it's very exciting to build towards. I think this is what we all want. Right? That is, like, AGI that's capable of invention and discovery, that will actually increase the rate of, like, scientific frontier innovation. But we don't have that yet.
Speaker 2:Switching gears a little bit. Five years from now, do you think the average American will pay for an LLM subscription?
Speaker 3:I think the cost is probably gonna go down far enough where that just gets built into the subsidy of whatever the product is, and the revenue stream is attached somewhere else. I haven't thought deeply about it, but that's, like, my off the top of my head thinking.
Speaker 2:Yeah. Yeah. We were talking offline this morning about just this dynamic of, like, the average American will actually churn from HBO Max because at that moment in time, and it's $20 a month or whatever the fee is, there's not a show that they really love. So, like, yes, there's a lot of, like, value there, but they're just like, yeah, but, like, I don't
Speaker 3:I mean, I still know so many people that don't even have subscriptions, like my wife. She uses ChatGPT all the time, but she's on the free plan. Yeah. And that's, like, enough to get a lot of value for what she does. I sort of respect that. You know, yes, there are gonna be power users, and those are gonna be the folks who really are doing, like, amazing, powerful things and stuff. I suspect the base rate's gonna go down enough where, you know, it's gonna be more embedded across, like, almost all the products and experiences you have, as opposed to being, like, you know, a dedicated thing you're paying a lot of, like, one off cash for.
Speaker 3:Yeah. Yeah. Now, the other thing is you might buy products that need it, like robotics or, you know, things where intelligence is built in. I think you're gonna see product categories that emerge, and, like, people go buy those. But, like, paying a subscription itself, I'm not as confident on.
Speaker 2:Yeah. The other thing that stood out to me today specifically was Midjourney came out with their new video model. It's very good. It's $10 a month for effectively unlimited prompts, and you comp that to Google's Veo three, which is $500 a month, and you're still heavily gated. Yeah.
Speaker 2:Mhmm. And it just seems obvious that that
Speaker 3:Five years may not be the right timeline for that prediction, by the way. Might be longer than that.
Speaker 1:Sure. Sure.
Speaker 3:Yeah. Yeah. Like, there's so much use case diffusion. Like, one of the things we're seeing with Zapier AI, which is growing quite strongly right now, it's on the exponential growth path for the AI usage and AI apps.
Speaker 3:I've looked at this, and I've been wondering, is this a function of the technology getting better, or of use case diffusion? And I've looked at the usage, and the majority of the usage is still in, like, four o or cheaper or worse models right now, with people putting AI into the middle of automations. Interesting. And so I'm pretty confident that a lot of this, like, agentic automation right now is actually not being driven as a result of, like, technology progress from AI and AGI, but more by the market just finally starting to learn. Okay.
Speaker 3:Here's what we can use it for and can't use it for, what it's good at and not good at. And it's very similar to the learning curve we saw in earlier examples, where once you learn what the tool can do, you carry the tool forward with you in time, and then you encounter a new situation or circumstance you can apply your tool to. And so we accrete use cases over time. So I think we're still very, very early. Yeah.
Speaker 3:You know? But I think that a lot of the usage increase, thirty minutes a day even on ChatGPT, is a function of use case diffusion more than tech progress.
Speaker 2:Yep. That's fascinating. Do you think xAI will eventually need to generate a lot of revenue? Mhmm. Where do you think it'll come from?
Speaker 3:I mean, if they make progress towards AGI, it's probably gonna be enabling all the other services they have around, like, the Elon ecosystem. Yeah. That'd be my guess. Less selling it as a direct product itself, and more going head to head, you know, on cars, rockets, robotics. Like, there's so many places where I think you would want to use it, where you have the, like, product shape where you can use higher degrees of intelligence.
Speaker 3:You're not bound by just, like, you know, the fastest consumer experience you could deliver. I suspect that might actually be where most of the value is, at least in the near term. Yeah. Who knows, over the long term, what shape all this takes.
Speaker 1:I mean, yeah, that was in many ways my long term thesis around the Llama project and the superintelligence team at Meta: there's just so much work to do at Meta broadly that's enabled by AI that if you can avoid the long term OpenAI bill, like, that's probably worth billions and billions of dollars, because of how AI is going to infuse into every single corner of their entire ecosystem. And it's all at such massive scale that the cost of using other vendors might be in the billions. And so just looking at the savings there might make sense. I don't know. I mean,
Speaker 3:I think the most important takeaway, and I think I shared this last time I was on with you guys, yeah, it's still true today, is that we are idea constrained to get to AGI. This is what Arc's v one data shows. This is what v two's data shows.
Speaker 3:V two's completely unsaturated. We're not even talking about efficiency. Just nothing can do it. And v two looks very similar to v one. Even on, like, hyper specific solutions on Kaggle, the ARC Prize twenty twenty five contest, progress has been slower this year than it was last year.
Speaker 3:Wow. We are very much still there. Like, the thing I can state and assert most confidently is that, like, we need new ideas. There are some major breakthroughs we have not figured out or found yet.
Speaker 2:Does it worry you that that could take years and years and years and what happens to
Speaker 3:That's one of the reasons I funded the prize last year. I wanted to, like, correct the market narrative here. Like, I've spent a lot of time with students, a lot of young researchers, and, like, at the beginning of last year, there was a serious vibe of, like, oh, it's all figured out. I'm not gonna go do AGI research. I'm just gonna go work at the application layer and stuff and make a quick buck before AGI gets here.
Speaker 1:Interesting.
Speaker 3:That is a... boy, you know, look. If you wanna live in a world of AGI for yourself or your kids, like, I think what we should be trying to encourage is designing the, like, strongest global innovation environment possible. And that's one where there's a lot of diversity of approach, a lot of different ideas being tried, a lot of sharing. You know, kinda what AI looked like in the 2010 to 2020 era.
Speaker 1:Yeah.
Speaker 3:Right? A very open approach is how we got from the transformer to g p t one and g p t two and so on to today. Yeah. You know, I'm optimistic. I think the last six months have looked a lot better than the previous two, three years.
Speaker 3:I think the AI industry is maturing actually quite a bit on this front, this topic, as well. Being more tolerant and kinda recognizing, okay, we don't have it all figured out. There are more ideas we need. That's been encouraging, and I think it's seeping down to the lower levels too.
Speaker 3:But, yeah, my sort of broad view is any capable human who has new ideas should work on AGI. Like, that's the most important thing you could be doing at this point in time.
Speaker 1:That's amazing. Thank you so much for stopping by. This is always a fantastic
Speaker 3:Yeah. Great catching up, guys. Thanks for having me.
Speaker 1:We'll talk to you soon.
Speaker 2:Cheers. Yep. Bye.
Speaker 1:Next up, we have David Cahn from Sequoia Capital coming in. David Cahn.
Speaker 2:First time.
Speaker 1:First time on the show. Exciting. Coming in. Wrote a fantastic piece. I want him to break it down.
Speaker 1:Would you mind kicking it off with a little bit of an introduction on yourself?
Speaker 6:Hey, guys. Good to see you. Yeah. My name is David Cahn. I'm one of the partners at Sequoia.
Speaker 6:Mhmm. Excited to chat with you guys.
Speaker 1:Yeah. Thanks so much for hopping on. Kick us off with the new blog post. What was the thesis? What inspired it?
Speaker 1:And then I'm sure we'll tie it to a bunch of news.
Speaker 6:So the new blog post was about AI companies, or AI labs, being more like sports teams. And, of course, we all probably saw the news around the Scale AI acquisition. Yep. Some inspiration coming from that. And then these rumors that we'd been starting to hear over the last few weeks, which have finally now bubbled out over the last couple days into the public conversation, around $100,000,000 signing bonuses, huge amounts of money being spent on top AI talent.
Speaker 6:And for me, I mean, I write these pieces as I think about and learn about AI and what an exciting time that we're living through. And I'm I'm pretty fascinated by kind of the human dynamics of it all. There's, like, seven to 10 people at the top of these big tech companies. They control, you know, the big magnificent seven are now a third of public market cap. They're extremely powerful and important.
Speaker 6:And I think sometimes in AI, there's this notion that AI is super abstract or these things are inevitable, but, actually, it's human dynamics. It's sort of this game of three d chess that's being played by these really fascinating individuals. And so as, you know, observers on the sidelines, we all get to watch and see how this stuff plays out. And I like to write about it as I think about it.
Speaker 2:I posted on February 2, companies should do NBA style trade deals. I wanna see OpenAI trade its COO and CFO to Anthropic in exchange for their CMO, a cracked PM, and a couple of Waterloo class of twenty twenty six new grads.
Speaker 6:Well, there is kind of this new draft dynamic. Right? Like, every year, there's kind of this new draft. And as people see these big packages, probably all the Stanford kids these days wanna be AI researchers. And so there is this notion of it getting refreshed.
Speaker 1:What is driving this? Is it true AGI pilling at the top of these organizations, where they think that, you know, it's gonna be winner take all, or it's gonna be a $10,000,000,000,000 market, and so there's no amount of money that you can overinvest? Or is it just, hey, it's a more competitive dynamic, and sure, we're a trillion dollar company.
Speaker 1:So, yes, spending $10,000,000,000 to move our market cap 1% is totally rational economically. Like, what do you think is driving this? And I wanna get into the different cultures of the different mag seven, because some of them don't seem to be doing this yet.
Speaker 2:Meta platforms had 63,000,000,000 of net income last year. So it's like, is spending a quarter of that to, like, you know, be a major player in the next wave worth it?
Speaker 1:They could have bought the Lakers six times over with that net income. Anyway, yeah, what is your take on, like, the ethos that's driving these bigger packages?
Speaker 6:Yeah. I think about this. And when I write these posts, my frame of mind is I almost, put myself in the shoes of these people, and I try to imagine what would I do? How would I think about it? What's the game theory of it?
Speaker 6:Yeah. And I think there's two things. Right? I think one thing is kind of the revealed preference seems to be that they're AGI pilled. Like, people can tell you a lot of things.
Speaker 6:I think you learn a lot more by watching the decisions people make.
Speaker 2:Mhmm.
Speaker 6:And I think the evidence suggests that they believe AGI is coming. It's extremely important for these companies. It's sort of must win. And I think for Meta, with these decisions, it's almost all in. We have to win.
Speaker 6:Mhmm. Then I think there's a second dynamic, which is you can believe these things, but, you know, we're all humans. I'm, again, fascinated by these kind of human dynamics. And you can get caught up in an arms race. Right?
Speaker 6:And as humans, we look at evidence, and we see evidence through a lens that we already have. And oftentimes, we overemphasize reinforcing evidence, and we underestimate evidence that disagrees with our point of view.
Speaker 2:Yeah. And so
Speaker 6:you can imagine that three years into this sort of AI moment that started with ChatGPT, people are really caught up in this. And I think the arms race dynamics are something I wrote about in the piece, and I've commented on in the past with AI's $600,000,000,000 question on the compute arms race dynamic. And I guess it's now interesting to see two arms races. First, there was a compute arms race. Everyone kinda got a lot of arms.
Speaker 6:Right? Everyone has a lot of GPUs now. A lot of arms. And now there's the talent arms race, and, everyone does not have equal talent. Right?
Speaker 6:And so now you're gonna see this arms race in talent. And everyone's talking about it, but I think we're still probably, like, inning two of this talent arms race, because in any arms race, when I up the ante, you have to respond. And I think it would be a fiction to assume that nobody's gonna respond to this.
Speaker 2:Who can respond, at least from a dollar standpoint?
Speaker 1:I want to talk about Apple because it seems like Apple has the money, but they seem like the least AGI pilled of any organization.
Speaker 2:But their poor CEO doesn't even crack $75,000,000 a year. That's not... you could
Speaker 1:make more. He makes less than an AI researcher. We should just show him an
Speaker 2:AI researcher's offer. He should become an AI researcher and go to Meta.
Speaker 6:It's funny you say that, because, you know, I think these numbers are so big that they're kinda hard to grapple with. And so after publishing the piece, I was like, I wonder how much, like, Fortune 100 CEOs make.
Speaker 1:Yeah.
Speaker 6:And I think, you know, an AI researcher is gonna make four times what the CEO of Coca-Cola makes. And it is kind of wild when you think about the economics. This is a totally new phenomenon in the scale of business.
Speaker 1:Yeah. Yeah. And I mean, it kind of begs the question: the numbers are huge, but the market caps of the companies are huge too. And so the question is maybe not should the AI researchers be paid less. It's, should Apple be set up to pay Tim Cook a billion dollars a year so that he can confidently go out and hire a couple people at a 100,000,000 or 50,000,000 or 200,000,000 and not feel like the organization has, like, flipped from a pyramidal standpoint?
Speaker 1:Like, you're still at the top. There's always a weird dynamic with, you know, a founder CEO who's taken a low salary and wants to hire a big shot. And, like, can you really have a healthy reporting dynamic if you're making half as much as your direct report?
Speaker 6:Well, the question is what's the marginal... you know, I think with any salary, if you just think about it in pure economic terms, right, like, what is the marginal benefit that you get from hiring this person? On a sports team with a pivotal position, you very clearly can understand kind of the economic rationale. You understand sports licensing and the way that these businesses make money. Hiring a star player actually does make economic sense for some of these franchises. And then the other element is sports teams are owned by mega rich individuals for whom ownership of the sports team is more than an economic investment.
Speaker 6:Right? Maybe they really care about the city. Maybe, you know, it's cool to own a sports team. And so I wonder if some of those sports-like dynamics play out here, where question one, and I don't think we know this yet, is what is the marginal benefit of an AI researcher? And, again, the revealed preference these organizations are showing us is, if you're one of the 50 AI researchers who's gonna get us to AGI, the marginal benefit is incredibly high.
Speaker 6:Right? So that's the revealed preference. And then second, if you have a team of all stars, what does that do for your company? What does that do for your market cap? What does that do for the innovation inside of your company?
Speaker 6:Mhmm. So I don't think we know yet the economics of it. I think you can make the argument in favor and say, hey, it actually is economically rational. This is the only thing that's gonna matter. If you increase the probability that we get to AGI by x percent, that is impactful.
Speaker 6:Yeah. Also, you could make the counterargument and say, hey, everyone just wants to have the team of all stars. It's not actually economically rational. Yeah. CEO pay, by the way, is benchmarked.
Speaker 6:You know, there's a lot of criticism of CEO pay historically. Right? But CEO pay is functionally, what is the replacement cost of this individual? What is the marginal benefit to the corporation? And there's a lot of brain damage that's gone into comp committees at public companies on how much CEOs should
Speaker 4:get paid.
Speaker 6:Right? Those are not arbitrary numbers, whereas this is more out of thin air. Right? This is more a new experiment, and so we're gonna see if it is economically rational or not. But regardless of whether it's economically rational, it is self perpetuating.
Speaker 6:If one company is offering everybody this amount of money and you're in an arms race, everybody's gonna have to respond.
Speaker 1:Yeah. Have you or anyone on the team comped this to what's happening in high frequency trading or on Wall Street? Because there's an interesting dynamic there where if a, you know, high frequency trader comes in and sets up some trading strategy that could produce a $100,000,000 in profit basically in perpetuity, but then if they leave, they can't take that code or strategy with them, and there's intense scrutiny on whether or not they are trying to exfiltrate that strategy. With AGI research, it feels like even if I go develop a transformer at Google, like, it's open source immediately with the paper, and then even the secrets, like, oh, reinforcement learning with human feedback is important, just kinda leak out immediately, and DeepSeek can clone it.
Speaker 1:Like, it just feels like a much more porous environment over in tech. And I don't know if that's just the legacy of, like, the open source community, but can you walk us through kind of the comp between the two?
Speaker 2:Yes. It is such an interesting dynamic. Yeah. We just had Mike on from Arc Prize, and he was saying, we need new ideas.
Speaker 2:Yeah. The issue is if you pay somebody a $100,000,000 signing bonus, they come into your organization and generate a new idea that gets us one step closer to superintelligence, or whatever, you know, you wanna define as what people are aiming for. And then immediately it's like, it's actually not really IP. Yeah.
Speaker 9:You can't really patent it.
Speaker 2:And then everybody benefits. Yeah. Right? So
Speaker 1:But, yeah, what's your take?
Speaker 6:It does seem pretty porous. I mean, people are moving back and forth. I don't think this was always true. I mean, you think back four or five years ago in AI, people were very loyal to these institutions.
Speaker 1:Mhmm.
Speaker 6:It does seem like that's changing. I mean, it is really hard to say no to these types of big numbers, and so I totally understand why people are saying, hey, it's just a life changing amount of money for my family. Of course, I'm gonna do it. And then I think, to your point, the question is, in the high frequency trading world, there's noncompetes.
Speaker 6:I mean, extremely complex contracts when they sign people, garden leave, all this stuff to prevent the secrets from leaking out. What we've seen in AI now is, with people moving fluidly between these organizations, it's basically impossible to keep anything within one organization. I roughly like to think of the AI ecosystem as an ecosystem. Like, all of these players are kind of contributing to this body of ideas. There's no proprietary IP.
Speaker 6:Maybe you're gonna have compute scale, and maybe there are moats there. But Yeah. It's unclear actually how that evolves and what you can keep in house. I do think maybe one dynamic at play here is, remember reading in the Steve Jobs bio? There's a story of Steve Jobs recruiting 50 people.
Speaker 6:He had 50 people working with him on, like, the sort of groundbreaking product that was gonna make Apple, and it actually worked. And then you read about Elon and the 50 people working on Tesla Autopilot. There's sort of this magic number 50. I don't know where it comes from, but it does seem to repeat throughout tech history: 50 people is kind of the largest organization you can have where everybody is talking to everybody and you're achieving incredible results. And so imagine if you take that as an artificial constraint, and I think that is what's happening with this lab that Meta's organizing.
Speaker 6:At least as I read in Bloomberg, it's gonna be about 50 people. You know, if you impose that constraint, then suddenly all of the math also changes, because you're like, okay, well, 50 times a 100,000,000 is actually only $5,000,000,000. Sure. You spent $5,000,000,000 on talent.
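The back-of-envelope math here is easy to check. A quick sketch in Python, using the rough figures quoted in the conversation (a 50-person lab, an assumed $100,000,000 average package, and Meta's $63,000,000,000 of net income mentioned earlier); these are conversational numbers, not verified ones:

```python
# Back-of-envelope math for the "lab of 50" constraint discussed above.
# The package size is a rough figure from the conversation, not a reported number.
TEAM_SIZE = 50                # the recurring "magic number" team size
AVG_PACKAGE = 100_000_000     # assumed $100M per researcher

total_talent_cost = TEAM_SIZE * AVG_PACKAGE
print(f"Total talent spend: ${total_talent_cost:,}")  # $5,000,000,000

# Compare against Meta's 2024 net income cited earlier in the show.
meta_net_income = 63_000_000_000
print(f"Share of Meta's net income: {total_talent_cost / meta_net_income:.1%}")
```

So even at $100,000,000 a head, a 50-person lab costs under a tenth of one year of Meta's net income.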
Speaker 6:Yes. If you believe that you're gonna get to AGI. So I also think that the artificial constraint matters, and
Speaker 1:Interesting.
Speaker 6:There's some rationality to that artificial constraint. What we've seen as these research organizations get bigger and bigger is you're not producing more results as you get more headcount. There's sort of a Pareto. Yeah. The top 20% of people produce 80% of
Speaker 1:the results. We need new coinage for that. Like, the two pizza team is well defined. This is called
Speaker 2:a Cahn. People call it Cahn's Law.
Speaker 1:Oh, yes. Yeah. Okay. Yeah.
Speaker 6:I'll take that.
Speaker 1:A Cahn-sized team. One Cahn team.
Speaker 2:A Cahn.
Speaker 7:It's just a Cahn.
Speaker 1:It's just a Cahn. Yes. Yeah. That's fascinating. Jordy, do you have anything else?
Speaker 1:I
Speaker 2:was interested if you had a reaction to The Gentle Singularity. Sure. It's published on Sam's blog Mhmm. Which means that it's not directly content marketing, it's not directly from OpenAI, but obviously you should read into it in multiple ways.
Speaker 2:Did you have any specific reactions to that? The question about that is always
Speaker 1:like disruptive innovation or sustaining innovation, and that ties to meta strategy, but I'd love to know.
Speaker 2:Well, it feels like, you know, the question I've been asking today is how many unprofitable, multi billion dollar AI labs can the capital markets support over the long run, over a five year period? If we stall out for a few years in terms of, you know, really meaningful progress, which, you know, Mike has said people aren't making, at least against the Arc Prize. There's not a lot of progress happening right now. OpenAI is actually in a great position. They have a subscription business. They have a consumer tech company with a lot of revenue.
Speaker 2:Anthropic is in a good Mhmm. Position. But there's this tension between the labs where, if you have billions of dollars on your balance sheet, you, in theory, have a lot of runway. But at the same time, to make progress, you have to spend a lot of money Mhmm. Both on talent and, you know, different training runs and data centers, etcetera.
Speaker 2:So, I just have this question around kind of like the next three years Yeah. As like a very kind of interesting period.
Speaker 6:Mhmm. Yeah. I think there's two pieces to that. I mean, one is, and I think about this a lot, like, the long run in AI. What does that actually mean?
Speaker 6:And I think that, you know, there were all these essays being published last year. Right? Like, AGI is coming in 2026.
Speaker 1:Mhmm.
Speaker 6:It is interesting how the narrative has changed in the last twelve months. Right? A year ago, you had all these people saying, hey, I'm one of the 100 people who knows. I really am resistant to those types of arguments.
Speaker 6:I find it to be frustrating. But, you know, it was, I'm one of the 100 people who's in the social circle where all my friends are building AGI, and AGI's coming next year,
Speaker 1:and Yeah.
Speaker 6:you guys are all crazy if you don't see it, and just be aware, you know, life is gonna change dramatically. And now we're at The Gentle Singularity. Right? Like, it is sort of an interesting contrast.
Speaker 2:That's what I'm saying. It's a huge contrast that's very convenient if you have a consumer app that billions of people are gonna use in the next few years, and there's a bunch of different ways to monetize that.
Speaker 6:And for me, I would tie it back. I mean, I did this math last year, the $600,000,000,000 question. It was initially a $200,000,000,000 question, but it was basically like, hey. If you look at NVIDIA revenue, you can use that as a proxy for total data center spending. Mhmm.
Speaker 6:We're spending $300,000,000,000 in data centers. We need to make $600,000,000,000 of revenue off of those data centers to get a 50% gross margin.
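The gross margin arithmetic behind the "$600,000,000,000 question" can be sketched directly. The figures are the rough ones quoted in the conversation (annualized data center spend, a 50% target margin, and the revenue numbers mentioned just below), not verified numbers:

```python
# A sketch of the "$600B question" arithmetic described above:
# if data-center spend is the cost of goods, what revenue is needed
# for the ecosystem to earn a 50% gross margin on it?
datacenter_spend = 300_000_000_000  # annualized spend, per the discussion
target_gross_margin = 0.50

# gross margin = (revenue - cost) / revenue  =>  revenue = cost / (1 - margin)
required_revenue = datacenter_spend / (1 - target_gross_margin)
print(f"Required revenue: ${required_revenue:,.0f}")  # $600,000,000,000

# Rough revenue actually visible at the time of the conversation.
openai_revenue = 10_000_000_000
coding_ai_revenue = 3_000_000_000
gap = required_revenue - (openai_revenue + coding_ai_revenue)
print(f"Gap to close: ${gap:,.0f}")
```

Which is the point being made: even generous revenue estimates cover only a small fraction of what the spend implies is needed.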
Speaker 1:Yeah.
Speaker 6:And so I had done this math, and then I basically said, hey, you know, total revenue in the AI ecosystem at the time: OpenAI had about 3,000,000,000 of revenue. And I did some rounding and said, okay, give everyone else a ton of credit, and maybe there's 50,000,000,000 of revenue, but we're 10% there, right, in terms of actually generating the revenue the AI ecosystem needs. And now twelve months later, you know, OpenAI is at 10,000,000,000, and the coding AI ecosystem's at 3,000,000,000.
Speaker 6:But we're still dramatically under monetizing this technology. And to your point, in the long run, the question becomes, how long does that sustain? And I have this sort of mental model now of AI as being carried by its own momentum. I think of it almost like this slingshot you're swinging around: it's sustaining itself by its own momentum. And there's this arms race, and there's this sort of microeconomic game theory of how each player is reacting to each other.
Speaker 1:Yeah.
Speaker 6:But at the end of the day, it's momentum that's carrying it. And at some point, maybe we get this AGI thing, and then it's, like, all worth it. And in the long run, I am very confident it's all gonna be worth it. When I'm 80 years old, AI is gonna be everywhere.
Speaker 8:But what do you do in
Speaker 6:the medium term? And I think nobody's talking about this right now, which is this sort of about face, this u-turn from one year ago, the, you guys are all crazy if you don't see AGI coming immediately. So I was listening to the podcast about the $100,000,000 signing bonuses, and it's like, well, you know, AI actually hasn't changed people's lives that much. It's gonna change people's lives later. I just think it's interesting how these narratives change quietly.
Speaker 6:Right? People don't talk about them and then they sort of quietly change.
Speaker 2:Well, there are big labs that directly benefit from the narrative that AGI is a year away. And then there are labs that will benefit greatly from a gentle singularity, where their competitors will struggle to raise additional capital in the long run, struggle to compete, struggle to retain Yep. Talent.
Speaker 1:Yeah. I know exactly what you're saying. Makes sense.
Speaker 6:But, also, I mean, you know, and I don't think this is one company, the whole ecosystem has to deal with this, but there were a lot of promises made a year ago.
Speaker 1:Yeah.
Speaker 6:And I think a lot of people would like to ignore those. Or, what's gonna happen when we pass all these deadlines where we've been told, like, that's AGI? Yeah. I just think that's interesting. And, clearly, that's not changing. Like, we're upping the ante.
Speaker 6:Right? Now it's
Speaker 2:like Yeah. Yeah. Millions of
Speaker 6:dollars for people. But I guess this is part of why I think things get taken to such extremes: everyone believes the prize is so big.
Speaker 1:Yeah.
Speaker 6:And now you have to up the ante. So I think for a while, we're just gonna keep being in this phase of everyone upping the ante Yeah. To say, okay, we're not there yet, but we're gonna get there. We're gonna get there. We're gonna get there.
Speaker 6:Yeah. What does that look like?
Speaker 1:Well, this was a fantastic conversation. I want to have you back on as soon as possible to go way deeper into what this means for the early stage and mid stage markets because I'm sure you have a lot of visibility there. But we'll let you go and get back to the rest of your day. Thank you so much for stopping by.
Speaker 2:I'm glad we coined a new term, a Cahn. It's a talented group of technologists building
Speaker 1:the future. One Cahn. Get your Cahn.
Speaker 2:Get yourself a Cahn.
Speaker 1:Get yourself a Cahn and make it happen. Thank you so much, Dave. This was fantastic.
Speaker 2:I'll be right back.
Speaker 1:Talk to you soon. Next up, we have Walden from Cognition coming in, keeping the AI chat going, talking to him about everything that's going on in the AI ecosystem. Walden, are you there? Welcome to the stream.
Speaker 10:Yes. It is great to be on here. How are you guys doing?
Speaker 1:I'm doing great. Thanks so much for stopping by. Would you mind introducing, yourself in the context of Cognition? We've obviously had Scott on the show multiple times, and people are probably familiar with Devon and Cognition, but I'd love to know a little bit more about your story, how you wound up there, and what you're working on kind of day to day.
Speaker 10:Absolutely. I was good friends with Scott before we started Cognition. We did the same competition series growing up. And I was also working on various ways of working with these new programming agents. I was really waking up every day trying to figure this out.
Speaker 10:When I caught up with Scott, we figured out that, hey, we were both very interested in a similar thing. We had a group of people that were all, you know, ready to jump at this opportunity, and that's how we got together. So today, I'm chief product officer and cofounder. A lot of the time, honestly, I think people think of product as just, like, the interface or the UI or the integrations. I really do think the intelligence and brain behind Devin is so fundamental to how you think about the product, and we build our product team so that individual people are, you know, tuning the weights of the models, but they're also the ones talking to the customers.
Speaker 10:And so, in terms of the role I have, it's pretty broad, and I like to, you know, spend some weeks really deep into how do we make Devin more responsive, how do we make it smarter, and then other times, you know, really going and talking to customers, working on the UI, things like that.
Speaker 1:Cool. I wanna dive right into that question about trade offs in models from a product perspective. My question is, we talked to Mike from Arc AGI about the Pareto frontier. I'm feeling it personally. I'm feeling the AGI, but I'm also feeling the delay of the AGI, when I open up ChatGPT and I have to decide between 4o and o3 pro.
Speaker 1:Am I going to wait twelve minutes for the really good response, or do I want something now that might hallucinate? And I don't know if it's right. And I'm doing that work. It feels like OpenAI is starting to tuck those features under UI, and already, it feels like it's learning when I want to use o3 pro and making these buttons easier to access, and they're tucking models under UI layers. Talk to me about, in the context of Devin, how are you using different models?
Speaker 1:And when do you leave that up to the developer, versus something that you as a product can make an even better decision on than the human?
Speaker 10:Yeah. You know, it's so funny. The AGI is coming so fast, but it, like, can never come fast enough. Yep. There was really this time.
Speaker 10:I think it was probably around two years ago. I was taking a bet with a friend. At that point, these models were not even that good at math. And he said, oh, you know, I think they're gonna get, like, a gold medal at, like, the International Math Olympiad in just a year. I thought he was crazy. I took a bet against him, and I absolutely lost that bet.
Speaker 10:I've learned to kind of adjust my expectations upward. I think what you're pointing out is that as these things get smarter, they don't uniformly get smarter at everything. And you'll find that sometimes there'll be a model that will take fifteen minutes to figure out how to respond to "hi", and then there are models that, you know, do respond super fast but are not nearly as intelligent. I think one thing that we do as a product in Devin that is a bit different from other people is we kind of black box the models away. And part of that is, you know, we can then test and use a bunch of different models under the hood and kind of hide all that complexity from the users.
Speaker 10:You know, when you buy a computer, sure, you'll look at, oh, it has this much RAM, it has this much CPU, if you're into computers. But you're not, like, looking into all of the individual specs of the exact chip and model and things like that. I think that's where the space is gonna move: people want systems that are just gonna work. And, you know, we can put in the months, the human years of effort it takes to evaluate models and figure out what is this actually good at, so that an individual user who's just paying $20 a month doesn't have to figure that out.
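A minimal sketch of the "black box the models away" idea described here: route each request to a fast or a strong model based on a crude difficulty guess, so the user never picks a model. The model names and the scoring heuristic are invented for illustration; this is not how Devin actually routes.

```python
# Toy model router: hide model choice behind a difficulty heuristic.
# Names, latencies, and the scoring rule are all hypothetical.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    latency_s: float  # rough expected response time

FAST = Model("fast-model", latency_s=2.0)
STRONG = Model("strong-model", latency_s=120.0)

def estimate_difficulty(task: str) -> float:
    """Crude heuristic: longer, migration/debug-flavored tasks count as harder."""
    score = min(len(task) / 200, 1.0)
    if any(kw in task.lower() for kw in ("refactor", "migrate", "debug")):
        score += 0.5
    return min(score, 1.0)

def route(task: str) -> Model:
    """The user just submits a task; the system picks the model."""
    return STRONG if estimate_difficulty(task) > 0.5 else FAST

print(route("fix typo in README").name)                     # fast-model
print(route("migrate the billing service to Python").name)  # strong-model
```

The design point is the interface: the user states intent, and the cost/latency/quality trade-off stays inside the product, the way RAM and CPU specs stay inside a laptop spec sheet.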
Speaker 1:Mhmm.
Speaker 10:It's gonna be one of these things where, I think, the models are coming on so fast that it only becomes harder and harder to keep up with all of this. And so, eventually, I think people are just gonna get to the point where they just want things to work, and that's kind of where we're starting off.
Speaker 1:Talk to me more about AI winning an IMO gold medal in 2025. Polymarket has it down at, like, an 11% chance. It was up at 70%. I don't know if that's an aberration because of when this actual test will be run, but it sounded like you were very confident that Yeah.
Speaker 2:Remember when Scott was on
Speaker 1:was like, it's definitely gonna happen. But the Polymarket's been down.
Speaker 2:Might be that all the people that would go through the effort of trying to do it are too busy building coding agents. I
Speaker 10:I think So, yeah. When I when I basically said I I think I lost that bet is because we were only one point away from like a gold medal last year.
Speaker 1:Oh, okay.
Speaker 10:And that was already much farther than than we expected.
Speaker 2:Yeah,
Speaker 10:when you look at the Polymarket, that's a very interesting way to put it. I think part of it is people have considered that already completed, and so perhaps researchers aren't
Speaker 1:They're not working
Speaker 2:on it.
Speaker 10:putting effort into it. Like, who knows if they'll actually come out with a new release, because maybe in Google's mind, for instance, if they come out with a gold medal on the IMO, everyone's not gonna even care, because people have just accepted that it's gonna happen. So it's
Speaker 1:Oh, I
Speaker 2:think it would be I think it would
Speaker 1:be the biggest news of the day. I think we gotta get Google Comms in on this. They could do this. I think it's an easy thousand-like banger on X.
Speaker 10:But you are absolutely right.
Speaker 1:Yeah.
Speaker 10:It seems like top of mind for everyone, the labs, product developers, is really getting coding agents right. Yep. And part of that is because there's this belief that if you get these coding agents to work really well, then that'll just solve the rest of the research problem for you.
Speaker 1:Yeah. We
Speaker 10:have this joke internally that the only code we have to get Devin to be good at writing is Devin's own code. Yep. And then it can solve the rest
Speaker 1:of it. It self reinforces. Makes sense. Yeah. On that question of, like, the spiky intelligence, narrow reinforcement learning on specific tasks: maybe we think we're good enough at IMO level math, and so we're not gonna go for that last point. Where are we still early in the RL'ing around specific coding challenges?
Speaker 1:I've heard that distributed systems can be really difficult because you have to spin up all these different pieces of the system, and that just takes longer. And so you can't simulate as fast as, like, a small Python block of code that you can run in simulation in a millisecond. Or if we're talking about, like, I know Devin's useful for, like, replatforming from, you know, .NET to Python or something, or, you know, even going back to Fortran, it'd be great to just not have any of that legacy code sitting around. But is there enough training data around those older programming languages or less used programming languages?
Speaker 1:Or are you optimistic about new training runs? Maybe we don't get something that's like, oh, it feels way better, the vibe's way better, the IQ went up by a ton, but it's way better at something that's really relevant to you. Is that important right now?
Speaker 10:My mental model of these systems is their IQ is so much higher than any individual person I know, but what makes them still bad at specific things is, it's like, you know, someone who has the potential to be a really great engineer but hasn't gone to trade school yet to actually practice. So nowadays, I actually think about how smart these models are less in terms of how much training data they're being fed, or what language they're being fed, but actually more so in terms of the environments that they're being RL'd in. Mhmm. And so one example I have of this is, sometimes you can actually feel the reward function. Mhmm.
Speaker 10:Back a few months ago, when Anthropic released their, like, Sonnet 3.7 model. Yeah. One of the top complaints from people was, hey, it seems that this model is, like, super great now at, like, finding all the files it needs to change, coming up with a strategy. But it's really overeager. It just changes a lot of different things.
Speaker 10:And I think some people suspect that it's because when Anthropic was training the model, they told it, hey, we're gonna give you points for how many of, like, the correct things you do, and maybe they forgot to dock points for doing things that were kind of outside of that zone. They've fixed this since then. But you get these little leaks of, hey, like, you can kinda feel the reward function underneath this thing. So when we talk about, hey, can these things do distributed programming or not? Actually, in my opinion, the biggest thing that these models aren't great at yet is actually debugging live code.
Speaker 10:So I think part of the reason is it's actually really hard to create and rerun environments that interact with live systems. Right? And so if your task depends on, you know, working against a live customer or working against a live stream of events, these are things that are gonna be hard to replicate in our own environments. And so Sure. That's why these models are bad at this today.
Speaker 10:The good news is these aren't, like, fundamental limits. I think these are all engineering challenges. They're less, like, theoretical challenges. But it takes work to build up to that point.
Speaker 1:Can you explain reward hacking at a high level, and then kind of give me some examples of how that interfaces with AI agents and coding agents specifically?
Speaker 10:Absolutely. The way to think about these systems is they are just trying to maximize a number. Mhmm. So if you tell it, hey, we'll give you a point for every time that you do XYZ Mhmm. You'll find that, hey, that model will just keep on doing XYZ, keep on doing XYZ.
Speaker 10:I think the classic example of this is the, like, paper clip generating machine. So, like, you know, if you give it points for generating paper clips
Speaker 1:Yeah.
Speaker 10:But don't account for anything else in the world that is important for humanity, you know, then the system might do really bad things just to keep on generating paper clips. In the context of code, one example we've seen of this is, hey, if your goal is just get all the tests to pass, you might find that the system will just learn to delete the tests, or make the tests just, like, say, okay, I pass, rather than actually fixing the code. So a lot of times you just
Speaker 1:Which no real software engineer would ever do. Right? Yeah. I mean, no human has ever done that. Comment out the test.
Speaker 1:Okay. It's working well enough.
Speaker 10:Absolutely. It's
Speaker 1:It's almost too human.
Speaker 10:It's great. I think they're also, like... it reminds me of these systems that were trained on Slack responses. And when you would ask the system, hey, can you do this for me? It would say, oh, like, I'll get back to you on Monday. Yeah.
Speaker 10:What you try to get the model better at really matters. You have to be very thoughtful about it.
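The reward hacking failure mode described here, where deleting a failing test scores as well as fixing the bug, can be shown with a toy reward function. This is entirely hypothetical illustration code, not any lab's actual reward:

```python
# Toy illustration of reward hacking on a "make the tests pass" objective.

def naive_reward(tests_passed: int, tests_total: int) -> float:
    # Flaw: rewards the pass *rate*, with no penalty for removing tests.
    return tests_passed / tests_total if tests_total else 1.0

# Honest work: 9 of 10 tests pass after a real fix attempt.
print(naive_reward(9, 10))  # 0.9

# Reward hack: delete the one failing test -> perfect score, nothing fixed.
print(naive_reward(9, 9))   # 1.0

def better_reward(tests_passed: int, tests_total: int, original_total: int) -> float:
    # Score against the ORIGINAL test count and penalize deletions,
    # so removing tests can never beat fixing the code.
    penalty = max(original_total - tests_total, 0)
    return tests_passed / original_total - penalty

# Under the patched reward, the deletion strategy scores negative.
print(better_reward(9, 9, 10))
```

The general lesson, as the conversation puts it, is that the agent optimizes exactly what you measure, so the reward has to penalize the shortcuts as explicitly as it rewards the goal.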
Speaker 1:Yeah. Yeah. I've noticed that with some of the Whisper transcriptions. If you don't feed it enough audio, it'll just say, please like and subscribe. And it's like, okay.
Speaker 1:I know exactly where your training data came from, like, that's its default phrase, because it's just, like, what it's hearing. Jordy, you have
Speaker 2:a question? How are you guys approaching talent acquisition as a firm? You know, the headlines from this week are these talent wars. You guys have raised a lot of money, but I certainly imagine you're not making, you know, nine figure offers, or even trying to compete there. But what's been the approach?
Speaker 2:Does it mean you're, you know, keeping team sizes smaller? Or, you know, kind of dig into that for us.
Speaker 10:Yeah. The fundamental bet of the product we're building is it revolves around this idea that individual people will just be able to be way more levered up, because they'll be able to work with agents, and they will
Speaker 7:be able to work with all
Speaker 10:these tools to make themselves better. So at a minimum, we we can't be hiring people who their whole aspiration in life is to just, you know, write code at the level which Devin will be able to do in, like, you know, a year or two years from now. Mhmm. In many ways, I think we're kind of figuring out how do you build up an org from scratch that is AI native. And one thing that this already means is we actually kind of just delete some teams.
Speaker 10:A lot of companies at our stage have, like, an internal tools team to Mhmm. Maintain all the different services that engineers internally use. We found that internal tools are one of these things that AIs are just really good at. Mhmm. And we can just staff that team with Devins, and then basically have engineers just send in requests to those Devins for how to do that work. And that doesn't just save us head count. I think, fundamentally, the structure for how management works and how tasks get passed down looks very different. Especially in a lot of large companies you'll see today, the way it works is an engineer will get a task assigned to them, and then they'll go work on the task, and when they're done, be like, hey, what's my next task?
Speaker 10:And then, you know, you'll kind of go down the list of tasks you have. But here, every engineer is constantly juggling, like, three or four tasks. Partly because, you know, we're not trying to hire super fast, but also because you can juggle many tasks when you have these minions that can go and, you know, work on your things for you. So it means that, I think, we are very aggressive for people who we think can fit these roles and become very good generalists. And as we build up this company, we make sure that we're building it in a way that works in a world where AI can do so many different roles for you. And I think there will be kind of a moment for larger companies as well, when they realize, oh, shoot, all these structures and patterns of management that we've had in place are actually slowing us down from adopting AI.
Speaker 10:What will happen at that point? I'm very interested in seeing. But it's very clear, from us and from our smaller customers, that the earlier you bring it in, the easier it is to, you know, kind of pick things up.
Speaker 1:Are you tracking I mean, in the agent discourse, there's been this discussion of, like, we've gotten ten-minute AGI. Yes. These large models, 4.5, like, they're incredibly intelligent, extremely high IQ, extremely knowledgeable. They've compressed all of humanity's knowledge, but they're only good for a minute. Now it feels like maybe ten minutes with deep research.
Speaker 1:That's how most people interface with them. Have you been tracking kind of the longest agentic run of a Devin process? Is that the key metric? Is there anything you can share with us on that? Like, is there an example you could give where there's a lot of work to be done, but it's all in Devin's wheelhouse, so it just needs to go and grind for a couple hours, and it does it without kind of getting lost, like we know happens with a lot of these agents?
Speaker 10:Yeah. Absolutely. I think a lot of people in the space have expressed this feeling now, that they are feeling more and more like the bottleneck in these systems.
Speaker 1:Interesting.
Speaker 10:And the way this applies here is, we have seen people get really, really long tasks to work. But sometimes, it actually takes a lot of effort on your part Mhmm. Upfront to be able to get that to work. I was talking with a customer yesterday where he said, I just rewrote our entire testing system so that the error messages are a lot clearer, and the tests actually guide you through solving them one by one. And once he did that upfront work, he kind of just gave it to Devin.
Speaker 10:And we, in the product, started sending him warnings that, hey, your session's going on for really long. Are you sure this is actually working? Mhmm. He's like, no, no, it actually is, because I did all this upfront work Interesting.
Speaker 10:to get that to happen. I do think that this kind of ten-minute AGI, twenty-minute AGI, forty-minute AGI will just keep progressing, and people will be able to be more hands off. But people will also find that you can kind of always extend that duration by being a better manager, in some ways, and giving, you know, more clarity upfront for exactly what you want.
Speaker 1:Yeah. I mean, just like real life. That makes a ton of sense. Jordy, do you have another question?
Speaker 2:Last question from my side. I'm curious what kind of learnings you're having around agentic interface design. Mhmm. It feels like the sort of default, when you think about agentic software, is just something that can effectively sub in for a team member on any different software tool, whether it's Slack or Linear
Speaker 1:And you see this with deep research, where you ask it a question, and then it asks you a bunch of clarifying questions, kind of trying to build that test suite, to get you to give it more stuff so that it actually has something to run with.
Speaker 2:Yeah. So is is messaging gonna be like, you know, the dominant interface? Is there something else? Like, what what are you what are you kind of seeing or experimenting with on that side?
Speaker 10:You know, it's funny. I saw someone post about this idea that a lot of these products now will, like, make you respond to: hey, does this look like a good plan? Do you have questions before I start? And some people find that annoying. And I think this fundamentally comes down to: as these things become more like coworkers, you know, some people just have certain working styles that they like.
Speaker 10:Mhmm. Some kinds of coworkers, you know, work well together, and others don't. And it's funny, as you build a product, we find that some people just love the way Devin interacts, and then other people are like, Devin's too needy in these ways. Other people are saying, like, Devin doesn't ask me enough questions. And so there are toggles and controls that you need to have here.
Speaker 10:Karpathy recently gave a talk on how a lot of AI tools, not AI agents, but AI tools, kind of implicitly have ways you can use them where you have more control, and then ways you can use them where you have less control. But when your interface is just chat, now the model actually has to become more intelligent and detect: hey, this seems like someone who just wants me to go off and do work and get back when they're done; this seems like someone who's very curious and wants to hear more about the system. And so this is actually gonna be, I think, work that we'll have to see people do on the intelligence of the agent side. Not so that they get better at coding, but so that they know how to
Speaker 2:Get better at working with people. Yeah. That makes sense. That's interesting. I mean, the good thing is you can have some type of quick conversation with the user around their preferences and how they like to work, and then layer on the sort of real-time feedback and learning, and understand a lot more. Yeah.
Speaker 1:Roughly how big is the team now?
Speaker 10:Oh, on the engineering side, we're probably just over 20 or so engineers, and then the entire company as a whole is around 40 people now.
Speaker 1:Almost 50. That's the magic number. You get stuff done. We were just talking to the previous guest about how Steve Jobs set up a 50-person team to develop the first Apple product, and the Tesla Autopilot team was right around 50. There seems to be some magic number there.
Speaker 1:So it seems like it's a fantastic time for the business, where you have a product at scale, but a special size. Yeah. You have those, like, two-pizza teams here, but everyone kind of knows each other's name, basically. You're still a tight-knit group. Anyway, anything else, Jordy?
Speaker 2:Awesome. We're good. No.
Speaker 1:Thank you so much for stopping by. This was fantastic. Guys. We'll talk to you soon.
Speaker 2:Have a
Speaker 6:great day.
Speaker 10:See you. Bye.
Speaker 1:Really quickly, let me tell you about Bezel. Your Bezel concierge is available now to source you any watch on the planet. Seriously, any watch. Go to getbezel.com. And we have our next guest, Eoghan McCabe, coming into the studio to tell us the story of Intercom. How are you doing?
Speaker 2:There he is.
Speaker 4:Doing good. I did just sprint three and a half blocks, summer blocks.
Speaker 1:Oh. So Sorry.
Speaker 2:You can always just text us.
Speaker 1:Hey, Eoghan, if you're running late, it's all good. We'll just do more ads. You know, the fans
Speaker 2:love Yeah.
Speaker 10:Well, if
Speaker 4:we do more ads, does that mean ads for intercom?
Speaker 1:Are we
Speaker 3:officially Pretty
Speaker 2:soon. Pretty soon.
Speaker 1:I think you're breaking news. You're breaking the news. You're breaking the
Speaker 4:Damn it.
Speaker 1:No. It's good.
Speaker 2:You know
Speaker 4:the way it works with the pharma companies where they kinda own the news networks? Is that
Speaker 1:a That's the goal here for enterprise
Speaker 4:What favors do I get? Can you do a hit piece on Bret Taylor? Is that
Speaker 1:a man too?
Speaker 2:Yeah. Shots fired. Shots fired.
Speaker 4:I'm just no. He's a great guy. We'd just like some hit pieces on our competitors.
Speaker 1:Yeah. Of course.
Speaker 2:Yeah. We're lucky to not be in the hit piece business. We're not.
Speaker 5:We're we'll we'll review it.
Speaker 4:We're sponsoring the wrong show.
Speaker 1:Yeah. Yeah. Yeah. It's rough.
Speaker 2:Yeah. I think I think just buy like a 100,000 subscriptions to the information Yeah. And then start putting pressure on
Speaker 1:them and
Speaker 2:say, hey, you might wanna look into this company.
Speaker 1:I I would be down to do a a hit piece about technological stagnation. Yeah. I hate stagnation. And so I would I would wanna take down that as a concept.
Speaker 4:Really slur that whole Or or
Speaker 1:or closed IPO windows. Be prepared for a terrible hit piece on closed doors.
Speaker 2:Hit pieces on
Speaker 1:We like them open.
Speaker 2:On just CEOs that take their foot off the gas. Totally. You obviously Yes. You know, have not. The foot's been
Speaker 4:I've got two feet on the gas. Two think that's possible. It's a bit irresponsible clip.
Speaker 1:Yeah. Yeah. Yeah. Walk us through the story of that that that you posted, how you rebooted fifteen year old decelerating business. I wanna hear this from kinda set the table for us, and then we'll walk through the story because I think it's fascinating.
Speaker 4:Yeah. Sure. I mean, you know, it's a fifteen year old business. It's a successful SaaS business. We're in the service game.
Speaker 4:But at the end of our kind of first chapter, things slowed a little. We were unfocused, bad commercial decisions. This happens to successful companies. Become a victim of their own success and comfort creeps in. Definitely, 2020, 2021 were some comfortable culture times.
Speaker 4:And I got sick. I had to leave. So it's a it's a it's a it's a big long story that ultimately comes down to the fact that we lost our way a little bit. Yeah. And we had, like, five quarters of decelerating revenue.
Speaker 4:I came in midway the fifth, and it was looking kind of gloomy. And the two things we changed were we went to back to good old fashioned SaaS fundamentals, pricing that people liked, selling the product in a way that people liked. They used to have to, like, talk to sales for everything. And it's just those simple things becoming super customer first and started to really accelerate the previous SaaS business. In the last eight quarters, the growth rate of the SaaS business has decreased by 10 x, which is really remarkable.
Speaker 4:Mhmm. But then, of course, we jumped on AI. And we were kind of OG AI guys. We had dabbled not dabbled. We had developed, you know, RealAI products before, but they were baby AI compared to what we all have today.
Speaker 4:But as soon as GPT 3.5 came out, we all just jumped on that, and we saw that there was opportunities for this whole new category where you could create what we call now customer agents
Speaker 2:Mhmm.
Speaker 4:Doing all of the things, customer success and service and sales and marketing that, you know, humans used to do and hate. And that just propelled the business even further. Finn, our customer agent, is now, you know, the best performing in that category in our benchmarks. We win every bake off against our our chief our primary competitors. We have the most customers, most ARR.
Speaker 4:So we're kind of this very weird story that I don't know any comparisons to where where previous generation SaaS that's actually winning in the category in AI. It you know, I think it's hard. It's really, really difficult for the previous generation to slower older cultures that work in the age of AI. It requires a lot of agility and dynamism. I often mess up that word, but Yeah.
Speaker 4:It really does.
Speaker 1:Talk to me about, like, the the different break points for growing a company. I feel like mentally, think about it as, like, just the founders, maybe the first 10. Then we were talking to previous guests about this break point at like 50 people. Like there's something about there's a magic of a 50 person team. Everyone knows their and everyone knows each other's name.
Speaker 1:Then maybe there's other great AI
Speaker 2:group has 47 senior Really? Yeah. Engineers and and researchers. So right in that that 50 person sweet spot.
Speaker 1:But but I feel like I feel like there are like, in the story of startups, we often map them to funding rounds, seed rounds, series a, series b, and sometimes the head count grows in line with those. But some but I feel like head count growth might be more of a factor in, like, cultural drift. And I wanna go through some of the key moments where you feel like, like, you know, it was only one foot on the gas or the or the foot came off the gas or what what are the upstream drivers of that? What are the things culturally that you think startups need to get right at various scales as they grow? Because I feel like there's always these these different moments when you're when you're scaling up and you have a whole bunch of decisions to set the culture and you have a pretty limited time and you're focused on product and revenue and growth and all these other things, we culturally, there's some very there are very important decisions that get made at every I don't know if it's every order of magnitude, but there's these key milestones.
Speaker 1:And what tell me the story of of the the milestones in your mind. Maybe it's shifting offices or fundraising or or headcount milestones. But but what changes and what advice would you have for founders at every stage?
Speaker 4:That was a five minute question. Outstanding. Sorry. I've given you a hard time. I look.
Speaker 4:There's a kind of an intellectual set of answers to this that you can kind of break down and break it into tips. There's a kind of a more abstract thing, which is both, you know, in in good instances, self aggrandizing for someone in my position, but then also bad news in other instances. And the answer is that it it all comes from the top. Mhmm. At and the early days, the founder typically certainly founders that, you know, have any degree of success at the start.
Speaker 4:In the early days, the founders bring a phenomenal amount of energy, conviction, whether it's founded or not. You know, just just just belief, obsession, intellectual curiosity, excitement, passion, you know, a lot of intangible things, and that really drives great people. All of us want to make great money in this industry, and that's awesome. And I and I really think it should be celebrated. People are too shy to talk about that.
Speaker 4:But they also wanna be part of something meaningful and exciting, and they wanna work with people that inspire them and make them wanna push themselves. And so the reason a lot of these older generation companies lose a lot of steam is that just for very obvious human reasons. The person on top is not pushing in that same way. When you have fifteen years of sass, how exciting is every day gonna remain? Like, honestly.
Speaker 4:Like, the first year, you're like, cool. SaaS, churn. Wow. Okay. I get the math.
Speaker 4:And then in year two, you're like, okay. Churn. Get it. Cool. Raise some money.
Speaker 4:Year three, roadmaps. Year fifteen of SaaS, you're done. You're not bouncing to the office every day. And and people will pick up on that all around you. Of course that they will.
Speaker 4:Then you don't push yourself in the same way. You don't really pitch the opportunity to new employees. You settle a little bit because life is hard. You've got other priorities. Maybe you've drifted a little bit.
Speaker 4:You've got side projects. Some people end up with families, girlfriends, ex girlfriends. Like, life gets way more complicated than it is for a 26 year old kid who just moved to San Francisco and one that has one of those buzz cuts in the curly hair on top. It's like life just gets more complicated, and that's that's what happens. And so part of our secret is that AI reinvigorated us.
Speaker 1:Yeah.
Speaker 4:Like, I would not still be doing this if we were just doing SAS. SAS is not only kind of easy, but super boring to me now. That's okay. Hopefully, AI and whatnot will get boring too, and it'll be something new. And so, again, we could break it down and get all mechanical and try and pull out some, like, tips and tricks and advice here, but really it just comes down to energy.
Speaker 4:And so so for anyone who would want to reinvigorate their company, it's the question is how how can you reinvigorate yourself? And I see a lot of founders of late stage companies, many of them public. You kinda haven't heard from them for years. Their stock price has gone sideways for five, maybe seven, eight years. And I'm like, what are they still doing?
Speaker 4:And I wonder, are they able to admit to themselves that, like, they don't wanna do this anymore? And if you don't wanna do it anymore, make a change. Like, kinda move on. And so I think a lot of people just they struggle with that moving on and making that decision because their whole identity and sense of purpose and validity in the world comes from, I'm CEO of whatever.
Speaker 1:Yeah.
Speaker 4:So it's like this deeply human squishy spiritual challenge rather than an MBA type challenge.
Speaker 1:What about bringing in young people to kind of keep that reinvigoration process going? I'm just thinking about, you know, Zuck is paying so much to bring in Alexander Wong from Scale AI. At the same time, you know, like, the level of energy that Alex is going to bring to that organization is potentially worth a lot, you know?
Speaker 2:Yeah. But at the same time, Zuck is super high energy.
Speaker 1:Yeah. Right? But but but there's another world where you surround yourself
Speaker 2:with Higher low energy people.
Speaker 1:Yeah. But I I guess what I'm getting in the trap of is like you can be the high energy founder as your business becomes more serious. People keep telling you like, bring in the seasoned executives. Bring in the bring in the gray hairs. The people who will will keep the, you know, steady hand on the tiller.
Speaker 1:And and that can lead to a less dynamic, less lower dynamism in your organization. Is there is there a hack to just hiring crazy young people and empowering them to be in the c suite whether or not they really, like, deserve it by traditional standards.
Speaker 4:Yeah. Like, the challenge is super obvious, which is these young, crazy, energetic, optimistic, wide eyed people are super messy, super sloppy. They get in fights. They get upset. They're hungover late.
Speaker 4:Like, they don't know how to do larger company professional things.
Speaker 1:Sure.
Speaker 4:And so part of the problem is that larger companies, to scale and get more efficient and become global organizations across many offices and time zones, is that they, like, introduce a lot of, like, regularity, and they, like, iron out the chaos. So part of it is you have to be willing to entertain chaos. You have to be willing to put younger people in positions of influence and let the chips fall where they may. It's possible to give them roles where they don't have to engage with the entire organization. Like, we've definitely got roles in Intercom where you can have to collaborate across two time zones oh, I'm sorry, Across eight time zones in two different teams.
Speaker 4:But then we've got other positions where you've got one super smart guy. He's 30, which is ten plus years older than the execs. But you give him, like, one thing he can do on his own, and he'll crush it.
Speaker 2:Mhmm.
Speaker 4:So part of it is knowing how to, like, work with these people. But, also, like, this is a special type of x factor young person who knows what they don't know. And, yeah, the the degree to which this is a talent game and that people are not fungible is not recognized at all. People imagine like, oh, you lost one person, you get a backfill. Entire organizations just flip and change completely when you change out the individuals involved.
Speaker 4:Mhmm. So, yeah, it's not easy.
Speaker 1:Do you
Speaker 2:think venture should take they they should take almost like turnarounds more seriously? Like, in some ways, you were your own turn around CEO. But one of the, I think the issues of the venture industry is, let's say a company becomes a unicorn, has a $100,000,000 plus of ARR and then the sort of growth starts slowing. Maybe the CEO is like gets bored or whatever partying or they go start going
Speaker 1:to Europe. And
Speaker 2:the VC's kind of write it off and they're like, made my return or at least I'll get my money back. But at the same time, I mean, private equity is built like there's been empires been built around like the turnaround and in some ways, think about you know, a talented founder maybe they took their first company through YC and had a nice exit. A lot of those people could go to a company that has like a 100,000,000 of revenue and like a big customer base and like actually make more money and start on you know second or third Yeah. Base and take you know, you you can make quite a lot of money taking a business from a 100,000,000 to hundreds of millions of revenue. And that that can sometimes be easier than taking it from zero to 10.
Speaker 2:Totally.
Speaker 1:Yeah. What do you think?
Speaker 4:I think theoretically, I think, you know, VCs are best are pattern matchers, and turnarounds don't fit the pattern.
Speaker 1:Mhmm.
Speaker 4:You know, think of all of the most successful and exciting zeros of technology over the last twenty years. They invented a thing, something, something, something. It's worth 10,000,000,000. Like, it's kind of that. It's like, yes, sometimes it takes a little bit longer.
Speaker 4:There's a slightly circuitous route, but it's not the company was totally failing and they had to reinvent themselves, then they became the biggest thing ever. So, you know, for a VC, I just think it's really, really hard that it's it's just hard for them to get at, like, the underlying narrative and the underlying story. This is where PE comes into play, but PE has all of its own problems too.
Speaker 1:And
Speaker 4:these guys want deals, and they won't be exciting to a lot of people who started venture backed companies. It's it's straight up difficult. And to my point previously, the idea that, you know, talent isn't that fungible. You know, you take any given company. If you replace the founder with even another highly competent founder, they're probably not the chance that they're right for that opportunity and idea like, look.
Speaker 4:There's so many people, you know, so much more accomplished than I am, but I'm pretty accomplished. I know how to run and build and reaccelerate businesses, but I'd be a probably a shady CEO for 99% of other companies just because, you know, that's not what I do, and I don't have any experience there, etcetera, etcetera. I don't even know the people there. So I think people should be bearish on turnarounds. Yeah.
Speaker 4:You know, like, turnarounds don't really work. Yeah. They're, like, generally, like, it's a failed thing.
Speaker 1:Yeah. No. Somebody will figure out. Maybe it's Jeremy Giffon. Maybe he'll do it.
Speaker 5:Yeah. Well, well, that's
Speaker 2:even a different strategy. But but, yeah, I think this idea of, like, you need to like
Speaker 1:a cracked founder into a a company. That's it's a Yeah.
Speaker 4:The cracked founder wants to do their own thing. They wanna start from scratch. They want all the equity themselves. Yeah. Like the recap alone that it would take just
Speaker 3:Yeah.
Speaker 4:Won't be palatable to existing investors. Failed companies are just generally doomed to fail. And Yeah. When there are so many opportunities out there as an investor, you know, you gotta just like not try anything novel.
Speaker 2:Yeah. Yeah. But in in And in your case, it's like a little bit of luck. The timing of like you going back in Yeah. GPT 3.5, you know, seeing the opportunity for a new product, all this stuff.
Speaker 2:But you also had to make the choice to risk your own ego to go back in and if revenue had decelerated for another five quarters, you'd be sitting there being like, yeah, maybe I'm not as good as I thought I was, you know, and you had
Speaker 4:to take But I I got to cheat a little bit because when I was out, I was like sick. I had been beaten up in the press. I was, like, just my confidence was pretty low. Mhmm. And I didn't really have a lot to lose, and I felt like I was without purpose.
Speaker 4:I always wanted to be independently wealthy and free, and I finally got it. It was
Speaker 1:Mhmm.
Speaker 4:In many ways magical and then completely boring. And so when I had this opportunity to go back, have purpose, and I had nothing to lose, I took it. Like, it's easy now to tell this maverick story. You're so brave. Look what you did.
Speaker 4:You took a big risk. No. And you have nothing to lose. You'll just go for it. And and I think part of the secret is if people can separate themselves from their egos a little or work on their egos or learn to love their egos and not be run by their egos, great things are possible.
Speaker 4:Most bad decisions are made just out of fear, and the fear is driven by just fear of public failure and embarrassing yourself. I found myself unafraid to embarrass myself. Yeah. Look at how I'm speaking to you now. That's amazing.
Speaker 4:I love Like, it's not fully true. The ego is still there and present. Totally. But the the the the smaller and weaker it gets, the more freedom you have.
Speaker 1:It's fantastic. Well, you so much for stopping by.
Speaker 2:Always a pleasure.
Speaker 1:We could yap like this for
Speaker 2:hours. Feel like people are gonna listen to this as like a little founder therapy.
Speaker 1:A 100%. I was like, this
Speaker 2:is to welcome you. We can do a little therapy corner.
Speaker 1:It's amazing. Yeah.
Speaker 2:Once a month, you come on
Speaker 1:Pump up speech.
Speaker 2:Pump up speech.
Speaker 1:It's great.
Speaker 10:This is great.
Speaker 1:Can do
Speaker 4:of diet. If you're interested.
Speaker 1:That'll be the next one.
Speaker 4:We'll talk to gents. Hey. This is cheesy. I wanna give a shout out to my friend, Stuart. That's it.
Speaker 4:I promise I'd
Speaker 1:do Amazing.
Speaker 4:Shout out
Speaker 2:to Stuart. To Stuart.
Speaker 1:Air horn for Stuart. Hey. Do do we need to ring the gong for Stuart?
Speaker 2:Yeah.
Speaker 1:What what's Stuart do?
Speaker 2:Do that.
Speaker 1:He's We gotta ring the gong for Stuart? Okay.
Speaker 2:Ring the gong.
Speaker 4:Had a big year.
Speaker 1:He's had a year. Congratulations to Stuart. You. Stewart. Let's go, Stewart.
Speaker 1:Congratulations. We will see
Speaker 2:you soon.
Speaker 1:Have a great rest
Speaker 2:of your day, guys.
Speaker 4:Talk to you soon. Peace.
Speaker 1:Up next, we're staying in in the Irish shower. We're going over to Stripe.
Speaker 2:Stripe.
Speaker 1:Luck of the Irish hit Intercom. We'll check-in on how the luck of the Irish is treating Stripe. We got There
Speaker 2:he is.
Speaker 1:From Stripe. Welcome to the stream. How are doing?
Speaker 2:I'm good. Moment we've been waiting for. We're so sorry.
Speaker 1:For a week.
Speaker 2:For a couple weeks ago.
Speaker 1:It wasn't our fault. It wasn't our fault.
Speaker 7:Geopolitics is currently outside of each of your controls.
Speaker 1:Yes. Yes.
Speaker 2:Well, that wasn't that wasn't even geopolitics. That was South South African attacking an American on the timeline.
Speaker 1:A reality TV star.
Speaker 2:Yeah. Former reality TV star.
Speaker 7:Yes. Jordy, I have to say it's really awesome to see you in this format because you and I have been Zooming Yeah. For, I think, almost a decade now, and now it's live in front of all these, this great audience. It's really great
Speaker 2:to see what y'all are up It's it's a bummer. I don't we've never met in person. Yeah. I've had so many Zooms with you in this exact room. I I have a theory that, like, you'd never leave this room, actually.
Speaker 7:But we're busy.
Speaker 2:Yeah. You're busy.
Speaker 1:Yeah. What is the major update? We wanted to have you on to talk through it. Can you break it down for us?
Speaker 2:I mean, I think it's more of a conversation. Jeff evolves his role over the last year, was was running point on Atlas Oh, yeah. Made it made it a platform that a meaningful percentage of c corps think are started on Atlas today.
Speaker 7:Yeah. All all about one in six now are
Speaker 1:Wow. C corps are
Speaker 10:on Atlas.
Speaker 7:Nice example. But about halfway through last year, we looked at what was happening in AI and started to get really serious at Stripe about not just the application of it inside of our business for preventing fraud and running our own, payments foundation model, but also to help developers and businesses and consumers get ready for when AI starts to come to commerce. I'm still a little surprised that we got self driving cars before ubiquitous online commerce is mediated by agents, but you can really start to feel that AI is now coming very close to commerce, and will be part of buying decisions, discovery, execution of transactions, and new ways that businesses can find their audiences online. I mean, I'm really quite impressed to see the rate at which discovery has changed, and it feels like around the corner, commerce and AI is gonna be very closely mediated.
Speaker 2:Talk about, maybe some some early products experiment. What do you what you guys are experimenting on? What do you guys have already rolled out? All all that stuff.
Speaker 7:Yeah. What we've been trying to work with the fastest growing companies as they push the frontier of ejected commerce. So one of the first we worked with was Perplexity, where they have this buy with pro package inside of Perplexity, where they show great ecommerce search results. And then when you go to buy, you're not going to the merchants tab and dealing with the merchant's web page. You are actually just clicking buy.
Speaker 7:And in the background, a Stripe virtual card is spun up and given to an agent or any other automation process so that you can just have a completely seamless experience of buying in situ to where you're doing discovery. And we're starting to see that in more and more places. So recently, Hipcamp, which is, you know, a cool kid way to book camping online, of Airbnb for, places, They started to partner with Stripe to make national parks and state park inventory available to a wider audience because some many of those checkout pages are hard hard to use. That inventory is not naturally online, but these are amazing places for people to be able to camp. But there was just
Speaker 3:a huge amount for kids.
Speaker 2:I remember as a kid, there was a place my family used to always go camping, and my dad would, like, wake up at 5AM and just be refreshing this terrible site when, like, the campsite
Speaker 1:ready for the age of agentic
Speaker 2:And it was, like, very unreliable, like, the payments. So it was the equivalent of, like, a streetwear drop, but with, you know, some state park managing it. Great. Yeah. I think we're
Speaker 7:gonna see this more and more, where the Yeah. The inventory of the world is getting closer and closer to intent, and agents are a way to bring them together. And then it opens up really interesting questions that Stripe is trying to help developers answer. What is the developer experience for being able to execute those purchases? We have this new Order Intents API that we're trialing, where you can just give a product URL, and one of our agents will go buy it on your behalf.
Speaker 7:Mhmm. We have new ways for businesses to start to expose their inventory to agents in a safe and permissioned way. And then as a consumer, you know, it is reasonable to think that agentic processes are the last place you'd want surprises when it comes to money. You actually want that to be incredibly permissioned, safe, deterministic. You know what's going to happen.
Speaker 7:And so you can expect that the Stripe APIs are going to evolve for a new type of user in the world, which is an agent that can safely be delegated, with your permission, to buy on your behalf.
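The "permissioned, safe, deterministic" delegation described here can be pictured as a scoped virtual card: the wallet mints a one-off credential with hard limits, and any agent purchase is authorized against them. This is a minimal illustrative sketch; the class and method names are hypothetical, not Stripe's actual Issuing API.

```python
from dataclasses import dataclass

@dataclass
class VirtualCard:
    merchant: str          # only this merchant may charge the card
    limit_cents: int       # hard spending cap delegated to the agent
    spent_cents: int = 0

    def authorize(self, merchant: str, amount_cents: int) -> bool:
        """Approve a charge only if it stays inside the delegated scope."""
        if merchant != self.merchant:
            return False                     # wrong merchant: decline
        if self.spent_cents + amount_cents > self.limit_cents:
            return False                     # over the cap: decline
        self.spent_cents += amount_cents
        return True

# An agent asked to "book this campsite" can only spend what it was given.
card = VirtualCard(merchant="hipcamp.com", limit_cents=15_000)
print(card.authorize("hipcamp.com", 12_000))   # True: within scope
print(card.authorize("hipcamp.com", 5_000))    # False: would exceed the cap
print(card.authorize("evil.example", 1_000))   # False: wrong merchant
```

The point of the sketch is that the outcome is deterministic from the delegation: the consumer reasons about the card's scope, not about the agent's behavior.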
Speaker 1:Can you talk about Stripe Link and how that product might fit into a product like Perplexity? It's one of those classic things in AI and tech: okay, great, it surfaced the right product for me.
Speaker 1:Now I want it to buy it for me even faster. Now I don't even wanna go through the checkout process at all. As soon as I get the current thing, I want the next thing. So how do we see that playing out, with making that commerce experience even more seamless, or happening entirely inside of a chat interface or an agentic interface?
Speaker 7:Yeah. You know, the borders of the Internet are starting to blur, and you can already experience this. If you search for something on ChatGPT, they already have these cute little shopping cards that link you
Speaker 2:out. Yeah.
Speaker 7:If you are sitting in Cursor and you need access to a database, Cursor can recommend Supabase, and even start to accomplish your homework for you right in the editor. But there is this missing moment here, right? Okay, now I know about these products. What am I supposed to do? Go to a new tab, do some old-fashioned search, go through a bunch of blue links, find the website, go to the website, make an account, deal with the password problem, get a bunch of weird emails to confirm my password, find the settings page where I can get the billing information, pick my billing thing, put in my payment credential, get my API key.
Speaker 2:All the way back.
Speaker 7:Yeah. You know, I think we will very quickly start to see this loop that we've all been operating under for the past twenty years of the Internet as very arcane, whereas you just want to delegate your payment credentials to a safe, trusted place. And Stripe Link is this payment wallet we've made over the last few years, a cross-Internet payment wallet that works with cards and bank accounts and other future payment methods, where if you log in once to Link, then you will be able to safely delegate your permissioned credentials with a virtualized token, such that you can safely hand it off to a good robot to buy on your behalf. And so we see this as a new borderless way that commerce can happen, in a very permissioned, safe fashion.
Speaker 2:Yep. How are you thinking about agentic commerce and stablecoins? There's a lot of commentary around stables and how they can be applied here. Oftentimes, the people that just default to assuming that agents and agentic software will use tokens, whether they're stables or other tokens, usually run crypto funds or crypto companies.
Speaker 2:Right? So I've had maybe a more middle-of-the-road view, where I can imagine agentic commerce experiences leveraging stablecoins. I can also imagine them leveraging cards and ACH and a bunch of other forms of payments. So I'm assuming you've spent a lot of time thinking about this, and you guys have obviously been acquisitive recently with Bridge and Privy as well.
Speaker 7:Yeah. This is one of the areas in which Stripe is very problem-solving oriented and not religious about any particular technology or technique.
Speaker 2:Yeah.
Speaker 7:We think that humans are going to have a variety of ways that they want to pay and hold money. Stablecoins are a phenomenal way for many people in the world to hold funds and for businesses to move them across borders. And so we expect that stablecoins will be a very popular way for consumers and businesses to interact with each other. Then you have businesses who are going to have a long adoption curve when it comes to accepting and holding crypto assets. And then in some purchases, stablecoins might make sense between two parties that natively know how to interact in stablecoin.
Speaker 7:But often, it might be the case that Jordy has an Amex card, and the seller is expecting an ACH transaction. And we're sort of missing a universal way for all these types of currencies and rails to work together. Visa also announced a new way of being able to sort of hash your card and give it to an agent, with this Visa agentic token, where Stripe is one of the first partners to implement it. And I think we're just gonna see this proliferation of new ways that money can transact between parties, and we're gonna need some type of Babel fish translation service across all of them, because if you're gonna pick one rail, then you're gonna likely exclude many of the agents, humans, and businesses in the world.
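The "Babel fish" idea above is essentially a router: find a path from whatever the buyer can pay with to whatever the seller can accept. A toy sketch, with entirely hypothetical rails and conversion steps:

```python
# Map of (buyer rail, seller rail) -> conversion path. These entries are
# illustrative placeholders, not real Stripe routing logic.
CONVERSIONS = {
    ("card", "ach"): "card_acquirer -> bank_payout",
    ("card", "stablecoin"): "card_acquirer -> onramp",
    ("stablecoin", "ach"): "offramp -> bank_payout",
}

def route_payment(buyer_rail: str, seller_rail: str) -> str:
    """Return how to move money from the buyer's rail to the seller's."""
    if buyer_rail == seller_rail:
        return "direct"                     # same rail, no translation needed
    path = CONVERSIONS.get((buyer_rail, seller_rail))
    if path is None:
        raise ValueError(f"no route from {buyer_rail} to {seller_rail}")
    return path

print(route_payment("card", "ach"))         # Jordy's Amex -> seller's ACH
print(route_payment("stablecoin", "stablecoin"))
```

Picking a single rail is equivalent to deleting all but one row of that table, which is why a translation layer excludes fewer participants.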
Speaker 2:That makes sense. How are you guys thinking about, not to go too broad, but the business model of the internet as agents change things? The internet today is heavily reliant on advertising, and if you have a bot just crawling a website, or even when you look at other services. Ben Thompson had some good writing around what the future business model of the Internet could look like, and potentially micropayments.
Speaker 2:But I think our takeaway is, like, there are so many different stakeholders that would need to find some type of alignment. It's hard to see the obvious path forward here.
Speaker 7:Yeah. I think the universal want from businesses is just more channels to reach their customers, and to be able to do so in more direct kinds of ways. So say you go to a SaaS software provider and you say, hi, I sort of have two choices for you. You can have this very cool, large-budget billboard on the 101 and kind of hope that at 85 miles an hour, developers see your ad and then remember to implement it later.
Speaker 7:Or would you like agents to mediate the purchase in situ as developers are working, recommend it, and be able to integrate and accomplish the thing in five seconds, right inside their editor? Like, okay, yes. Well, first of all, I'll do both. But the second one sounds very nice, because I'll be able to directly attribute where it came from and have a great CAC for that.
Speaker 7:And, you know, the LTV should be even higher because the robot even integrated it directly. And so I think that we're gonna see new channels emerge for monetization, both usage-based through MCP or other ways that businesses are gonna expose their APIs to agents, but also transaction-based referral fees, which will supplement affiliate. And then I think it'll be a new way for businesses to make sure that agents can read their docs, can read their product SKUs, can have access to that information in a new permissioned way. I really liked the Karpathy post from last night, where he basically said that if your docs involve a click, that's not good, because agents want to act, not click and only read. They want to start acting.
Speaker 7:And so that's why, if you go to the Stripe docs, we really push: hey, here's our MCP, where you can just talk to the best way of integrating Stripe, and it can do it on your behalf, rather than, you know, reading something from a three-year-old corpus.
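For context on the docs-for-agents point: in the Model Context Protocol, a server advertises tools as structured descriptors (a name, a description, and a JSON Schema for arguments), which an agent can call directly instead of clicking through documentation. The tool below is a made-up example in that shape, not Stripe's actual MCP server.

```python
import json

# Hypothetical MCP-style tool descriptor. An agent lists a server's tools,
# reads descriptors like this, and invokes them with structured arguments.
tool = {
    "name": "create_payment_link",
    "description": "Create a payment link for a product so a customer can pay.",
    "inputSchema": {                 # standard JSON Schema, per the MCP spec
        "type": "object",
        "properties": {
            "amount": {"type": "integer", "description": "Amount in cents"},
            "currency": {"type": "string", "description": "e.g. 'usd'"},
        },
        "required": ["amount", "currency"],
    },
}

# No tabs, no clicks: the machine-readable schema tells the agent exactly
# which arguments are required to act.
print(json.dumps(tool["inputSchema"]["required"]))
```

This is the contrast with click-oriented docs: the schema is the documentation, and it is directly actionable.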
Speaker 1:Interesting. Last question for me. We wanna move on and let you get back to your day. Stripe was famous early on for having this crazy, kind of open culture around emails, where anyone in the entire organization could read them. That seems like incredible foresight for the moment today, because you don't have all this private information where it's, oh, well, do we train on that or not?
Speaker 1:You could very easily fine-tune a model or do some sort of embedding on the emails that are already deemed worthy of the entire organization reading them. Is that still part of the culture? Is there a tool, if you join Stripe, where you can get up to speed without needing to read every email, but you can kind of get the Stripe way of doing x, y, or z? Talk to me about Stripe's culture.
Speaker 7:Stripe, you know, has a very serious writing culture, where Yeah. Any decision I've been a part of for the last seven years, I can really point to some Google Doc that has the pros, the cons, and the decisions. As well as the email culture you mentioned: it's very commonplace at Stripe that if you spoke to a customer, or even after going on TBPN, you write, hi, I went on TBPN. You just cc a notes list, and now it's available for anyone who wants to subscribe to that notes list.
Speaker 7:But one of the major subscribers to notes lists now is agents. Interesting. And so if I'm in Slack, we have this really awesome bot called Trailbot that's read the paper trail of everything we've given it permission to. And I can just say at-Trailbot in any Slack room, and it has the context both of the team's Slack room I'm in, but also the whole corpus of Stripe and all of our permissioned wikis and documentation and internal tools. It takes the first line of defense on most questions immediately, and we actually have it to the point where it knows to jump in automatically without you even asking it.
Speaker 2:Very cool.
Speaker 7:And so I find that most of the time, we're able to just at-Trailbot and answer a lot of questions. And then increasingly, these agent tools, which I think are gonna apply to commerce quickly too, they're not just read-only. They're gonna start taking write actions and purchase actions. And for Stripe, internally, actions might be to roll back that deploy, or to auto-communicate to that customer because of an NPS score under 10, which we do, hopefully not too often. But then in the real world, if you wanna take some of these actions, you're gonna need to prove who you are, pay for it, make sure the merchant was able to accept that money, get the entitlement, and move on.
Speaker 1:Yeah. Even something as simple as like, you show up to a new company. Hey, there's this system over here that we're using and I don't have access. You might go to a wiki and ask, how do I get access? Now, you just ask and it just does it for you.
Speaker 2:It's so interesting to think about whether there's some type of user flow where, if somebody sends a Slack message Yeah. There's, like, a tiny delay built in, and it gives a bot an opportunity to actually front-run the question, because every message is gonna waste, like, you know Yeah.
Speaker 7:Ten minutes of Like you said, you need a new version of shadowban, where you first get your question answered and
Speaker 4:then Yeah.
Speaker 3:Go back to Do
Speaker 2:you really wanna ask this question because it was answered here, here, here, here, here, and like here's our recommended action.
Speaker 1:It's like proactive autocomplete. It's amazing.
Speaker 2:It'd be interesting Yeah. Slack just becomes completely silent because everybody's doing things and just immediately getting answers.
Speaker 7:For those of us who have nerdily taken notes and made docs over the years, that is somewhat We've been waiting for this moment. It was worth it. Yeah.
Speaker 1:We didn't know why. Yeah. Made fun of by some people for a long time, but it all came back. Well, thank you so much for stopping by.
Speaker 2:This was great, Jeff. Always welcome.
Speaker 1:Yeah. We'll have you back soon. You can talk more.
Speaker 7:To y'all.
Speaker 1:It's great.
Speaker 2:Talk to you Bye. Let's give it up for Jeff.
Speaker 1:Next up, we have a friend from Handshake coming in. He was mentioned in The Information. We've been mentioned in The Information. There's a bunch of Information boys hanging out in the chat.
Speaker 2:We love The Information.
Speaker 1:We love The Information. Thanks so much for joining.
Speaker 2:Garrett Lord, the nominative determinism is insane. Yeah.
Speaker 9:I think we have something else in common, too: saunaing. I'm a big sauna guy.
Speaker 1:No way. Yeah. There we go. Yeah. Yeah.
Speaker 1:The sauna is important
Speaker 2:to hear that when we moved into this new studio, we don't have a
Speaker 1:We don't have a good sauna setup, but we'll figure it out eventually.
Speaker 9:The cold plunge can fit nearby, though. I mean, there's still opportunities.
Speaker 1:Be good. Yeah. Maybe we gotta get in the cold plunge game. Anyway Just jumping in
Speaker 2:in full suit.
Speaker 1:Anyway, kick us off with a little introduction on the business. Obviously, it's in the news today. We covered a little bit about it earlier, but I'd love to get you to explain the business, a little bit of the history, and the positioning of the company.
Speaker 9:Yeah. For sure. So, I mean, the business started way back when I was in college. I started Handshake out of a personal pain that I faced Mhmm. In breaking in to find my first internship and first job.
Speaker 9:Interesting. I went to a no name school in the middle of nowhere called Michigan Tech.
Speaker 1:Yeah.
Speaker 2:It's awesome
Speaker 9:if you love to ski or love the cold, but if you wanted to break into Silicon Valley, nobody had really recruited there before. Fast forward to today, Handshake is the number one place that young people in America start, jump start, or restart their career. We're,
Speaker 1:like,
Speaker 9:kind of an 18 to 30 early career network. There's a million employers that use Handshake, so it's where the vast majority of employers recruit undergrads and interns and people after school. And then there's 18,000,000 students and young professionals who use the network, and we also power about 1,600 universities in the country.
Speaker 2:Mhmm.
Speaker 9:And the background that I think is important for right now, in this very moment, is that about eighteen months ago, many of the frontier labs as well as the large annotation-engine companies started reaching out to us, beating down the door, asking: do you have access to PhDs? Do you have access to master's students? And for us, that was incredible. I mean, we have 500,000 PhDs in the network. We have 3,000,000 master's students on the network.
Speaker 9:There's tens of millions of undergrads in the network. And we started serving these players with experts, really, as frontier model training has evolved. It's moved from generalists
Speaker 1:Mhmm.
Speaker 9:Like drawing bounding boxes around stop signs
Speaker 1:Yep.
Speaker 9:To today, experts. And experts in law, finance, medicine, mathematics, physics, chemistry, biology. These labs really are hungry for reasoning data to help improve, with a human in the loop, the actual frontier of what their models are capable of delivering. Let alone, in the future, when you talk about tool use or trajectories. So they started reaching out to us and saying, do you have access to these PhDs and master's students?
Speaker 9:And we started providing them; we are the leading provider of all this talent. And what we really started to realize is that people weren't getting paid on time. They were really confused. They would go through training and kinda get dropped out of a leaky bucket. We heard from students who were successful on it that they love the money.
Speaker 9:They love learning more about some of this AI tooling. They wanted to use AI tools in the classroom. They wanted to use it in their research. And so given that we have this huge supply and zero customer acquisition costs, we started building a human data business. Mhmm.
Speaker 9:And, really, in the construct of building that business, the focus is also around how you can think about evolving and automating a lot of the recruiting practices. Recruiting is still, you know, sourcing, screening, scheduling. There's a lot that AI can bring to bear on that.
Speaker 9:And so now, fast forward to today: in the last six months, we have been working with six of the frontier labs. We provide them tens of thousands of
Speaker 1:It's a lot of them. I didn't even know there were six. I thought there were only five. The big six. Count them up.
Speaker 1:You got them all.
Speaker 9:And we provide them with experts to help make their models more effective.
Speaker 1:Very cool. Talk to me about how the frontier labs are thinking about human data annotation and answer generation. It feels like we might be at the end of that story soon, or maybe we're shifting into more of a focus on the areas that are less verifiable: less "write the answer to an IMO-level math problem" and more in the biology and legal contexts where the models are falling behind. Where are the pockets of value? Where's the most demand within the human data generation industry, and where do you see it going over the next couple of years?
Speaker 9:Yeah. So maybe I'll go from the latter part of the question to the first. Where do we see it going over the next couple of years? It's definitely gonna evolve into audio. It's definitely gonna evolve into tool use.
Speaker 9:It's definitely gonna evolve into trajectories, and experts will be needed to provide data. Imagine almost recording your screen as you're conducting a task. Maybe you're building a slide deck, or, in an investor context, doing a DCF and doing competitive research. They want more data to help improve these models, especially when you think about agents and step-by-step problem solving. Where the puck is right now, and where the puck will continue to be if you talk to a lot of the frontier researchers, is expert data.
Speaker 9:And expert data is in basically every esoteric area of human knowledge. Mhmm. The models have already kind of sucked up the entirety of books and YouTube and, you know, human knowledge. What they really need is special data to make and understand the step-by-step reasoning that's required to fuel the future. And so think about academia, these PhDs: what is the definition of getting a PhD?
Speaker 9:The definition of getting a PhD is pushing forward an area of research that nobody else has done before, as reviewed by your peers; that's how you get your doctorate. And so this perpetually recurring stream of PhD students and master's students is really valuable in this very moment. And to zoom out to their experience: I don't know if you remember when you were in school, but you can make, like, $23 an hour being a teaching assistant. You know, you could drive DoorDash.
Speaker 9:Yeah. And we're paying these students, like, $60, $70, $80, $100-plus an hour. Wow. And we can connect it to actually getting jobs. So we envision a world where you get badges on your profile, and there are leaderboards by school.
Speaker 9:And we're actually I mean, what better way to articulate your skill than actually proving it by being able to break the model Mhmm. Or by being able to provide the model feedback. And so we believe that we can help you get more jobs with the million employers in the network, help you build your professional reputation, and articulate your skills, all the while making, like, $100 an hour when you want to. I mean, it's a good job.
Speaker 1:Yeah.
Speaker 2:How do you think about financing Handshake going forward? I'm sure you're generating a lot of revenue. You're clearly paying your network out quite a lot. We were just learning about Surge AI earlier and what they were able to do while bootstrapped. I imagine even in the last week you've had investors reach out trying to, you know, say, hey, Scale's out of the game.
Speaker 2:Do you want a $100,000,000? Do you wanna dance? But how are you thinking about the business going forward?
Speaker 9:Yeah. I mean, one of the ways we think about this market is, like, you know, if you don't have an audience, there's no moat. Mhmm. What our competitors are doing at some of these companies is they'll have hundreds of recruiters sitting on top of platforms, sending messages on companies like Handshake, or spending tens of millions of dollars a month doing performance advertising, trying to acquire experts on Instagram. You can imagine, if you're a physics PhD and you get an ad on Instagram from a company you've never heard of before, claiming they could pay you $100 an hour, it's kind of a jarring experience.
Speaker 9:And so, because we built a decade of trust and added a ton of value to these users' lives, we have no customer acquisition costs. And what that means is that we can pass along all those savings by paying contributors. We call them fellows; it's the move fellowship program. We can actually pay you more than any other vendor in the market.
Speaker 9:We can also pass along those savings to the frontier labs. So as you think about our overall P&L, our gross margin, and our ability to scale this business, considering the moat is the network that we built, we sit in an amazing position to grow extraordinarily quickly. And that's what we've been seeing. I mean, in the last month, we've grown by over three x. And, you know Oh.
Speaker 9:Like, it seems like there's a lot of demand continuing to be out
Speaker 2:there.
Speaker 1:I can imagine. I had no idea it was that big, though. Let's go. Let's go. Three hits in the concert.
Speaker 1:Three x. That's incredible. Last question, and we'll let you go. Are there any, like, weird areas where you think we'll see this type of human data generation pop up?
Speaker 1:I'm imagining, like, AI seems to be at, like, a 150 IQ. It can write code, and yet it can't, like, book me a flight. Do we need to take travel agents and have them go through the workflow, so that the models don't get hung up on: should I sign up for the credit card, or do I want insurance on this flight? So that we have a whole bunch of data specifically about that task. I'm just interested in this concept of economically valuable but highly niche tasks that we don't seem to be getting closer and closer to one-shotting with the current models. And I'm wondering if we're gonna see this long tail of hyper-specific business use cases, like what we saw in SaaS, where there would be a Hipmunk that just helps you book flights better.
Speaker 1:Is there going to be a flow where there's a new startup that's doing AI agents for flight booking, and then they're coming to you for a ton of data generation around how to actually book the correct flight? Because it learns whether or not you're okay with a layover, or how price-sensitive you are: all the things that you would get from the interaction with a human travel agent. Is that something that you think we'll see, or is that kind of just completely tangential?
Speaker 9:No. I think that's totally something we'll see. Interesting. What you just described is a trajectory, called a browser trajectory.
Speaker 1:Sure.
Speaker 9:And it's basically, like, you have a goal in mind.
Speaker 1:Yeah.
Speaker 9:And you you, you know, you have, like, a step by step
Speaker 10:Yeah.
Speaker 9:Kinda thoughts in your mind around how you accomplish that. And you navigate tools. You navigate the browser. Yep. You stitch together your own intuition to be able to accomplish that task.
Speaker 1:Yeah. You might look at your own calendar: when do I get off work? You look up how long it takes to get to the airport. It takes me a different amount of time to get to Burbank than LAX.
Speaker 1:What's the parking like? It's such a simple task, because you think, like, anyone can do that job for you, and yet to do it well is actually really hard.
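The "browser trajectory" being described is, roughly, a goal plus a recorded sequence of (thought, action) steps an expert took while doing the task. A hypothetical sketch of one such training record; all field names and the flight scenario are illustrative:

```python
# One expert-recorded browser trajectory: the goal, then each step's
# reasoning and the concrete browser action taken. Labs collect many of
# these and train models to imitate the sequence, not just the outcome.
trajectory = {
    "goal": "Book the cheapest nonstop LAX -> SFO flight next Friday",
    "steps": [
        {"thought": "Check my calendar for when I get off work",
         "action": "open_tab('calendar.example.com')"},
        {"thought": "Search for nonstop flights on that date",
         "action": "type('flight_search', 'LAX to SFO Friday nonstop')"},
        {"thought": "Sort by price and skip the credit-card upsell",
         "action": "click('sort_by_price')"},
    ],
}

print(len(trajectory["steps"]))   # 3
```

The value is in the "thought" column: it captures the intuition (calendar, airport choice, upsells) that the final booking alone would never reveal.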
Speaker 9:Totally. And you talk about just being able to talk to a model. Right?
Speaker 1:Like Totally.
Speaker 2:You don't
Speaker 9:even need to log in. Right? So you're gonna need audio data. You're gonna need trajectory data. You're gonna need to be able to interact with APIs.
Speaker 9:Humans, experts, will be needed for the next several years to be able to make that data happen
Speaker 1:Interesting.
Speaker 9:And to be able to power the frontier of where you wanna see it going.
Speaker 1:Well, that's exciting. I want to book a flight with an AI. It still hasn't happened. That's my own personal Turing test. Hopefully, you can make it happen.
Speaker 1:But thank you so much for stopping by. This was fantastic.
Speaker 10:Appreciate your time.
Speaker 1:We'll talk
Speaker 2:to you Great to meet you. Cheers.
Speaker 1:Coming in next, we have Tanay coming into the studio, into the TBPN Ultra Dome.
Speaker 2:A massive round.
Speaker 1:Oh. Oh. We're gonna hit the gong again.
Speaker 6:Yeah. They'll let you
Speaker 3:hit it.
Speaker 1:The tenth time of the show. Always a good time.
Speaker 2:There he is. Welcome.
Speaker 5:You got the news
Speaker 1:for us. Hit us with an introduction. Hit us with the news. What's going on in your world?
Speaker 2:Think we might be muted. Tanay?
Speaker 1:Are you there? Can you hear us?
Speaker 2:Are you there?
Speaker 1:I'm I'm I'm itching to hit the gong for you. I hear there's gong worthy news.
Speaker 2:Are you there? I'm gonna send him an email. Okay. You are live.
Speaker 1:You are live on TBPN. Okay.
Speaker 2:We'll pull him off for a
Speaker 1:In the meantime, I will tell you about Wander. Find your
Speaker 2:happy place. Find your happy place.
Speaker 1:We can hit you with the gong. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and twenty-four seven concierge service. It's a vacation home, but better, folks. We told you about this.
Speaker 2:We don't
Speaker 1:realize We
Speaker 2:told you about audio.
Speaker 1:We told you about Polymarket. We told you about Linear. Did we do Vanta? Automate compliance, manage risk, prove trust continuously. Vanta's trust management platform takes the manual work out of your security and compliance process and replaces it with continuous automation, whether you're pursuing your first framework or managing a complex program.
Speaker 1:Let's hear it for vanta.com.
Speaker 2:Folks, give it up for Vanta.
Speaker 1:And if we got an extra minute. Is he here?
Speaker 2:Back. Welcome to the stream. We made it.
Speaker 1:How you doing?
Speaker 2:We made it. Fantastic.
Speaker 5:Sorry for the audio issues.
Speaker 1:Oh, no. It it was a pleasure. We got to do extra ads. So, you know, you're making my day.
Speaker 2:It's a dream. It's a dream.
Speaker 1:You're making my day.
Speaker 2:What's going on? Quick intro. You've had a big day. What's what's happening?
Speaker 8:Thanks for having me. We announced a $200,000,000 round. Woah. We generally
Speaker 1:That's fantastic. Buried the lede, Jordy.
Speaker 8:You guys gotta start selling those. I feel
Speaker 3:like we need one in
Speaker 8:our office. But yeah. No. We're super excited.
Speaker 8:Congratulations. Commure obviously works in health care. You know, we power AI workflows, everything from ambient to revenue cycle payments in large hospitals.
Speaker 2:Oh, interesting. Give us a quick history of the company. Yeah. It's not often I see a 200 on a 6-something billion.
Speaker 2:I hadn't heard of the company before. Josh Browder connected us last night. Amazing. And so I'd love to hear your quick story Yeah. Kinda how you got here.
Speaker 1:History of the company. I wanna hear about the first customer too.
Speaker 8:For sure. So, I mean, Josh is great. I've known him since we were at Stanford together, same year. And, you know, the story behind Commure is interesting, because Commure started as an incubation inside General Catalyst. The best analogy I have is that it is Hemant's Palantir, very focused on health care.
Speaker 8:I started a company while I was at Stanford called Athelas, which was focused on applying language models and computer vision in health care. We started as a blood diagnostics company and then eventually grew into this mid-market, SMB OS for physicians. We merged the two companies about a year and a half ago, almost two years ago now, and then I took over as CEO with our management team. And so it's really a coming together of these two businesses. And, yeah, I mean, the company powers large hospitals, about 250,000 physicians and nurses.
Speaker 8:We power, you know, private practices here in California. That was our first customer: someone that my cofounder, Deepika, literally walked up to and cold-knocked on their door, and then got them to use one of our first devices and remote monitoring solutions.
Speaker 2:Amazing.
Speaker 8:And, yeah, that's the quick story.
Speaker 1:I got a bunch of questions. I wanna kind of contextualize this around the broader General Catalyst discussion, because there was news, I think just today, that Ohio authorities approved the first-ever purchase of a US hospital by a venture capital firm. That's General Catalyst's bid to acquire Summa Health, a hospital system in Akron with over 20 facilities. And I'd love for you to I'm sure you've studied this.
Speaker 1:What is going on there? Is there any sort of synergy across the portfolio? General Catalyst has had, like, a very differentiated venture strategy there, but I haven't had the chance to dig into it. So I'd love to get you to contextualize it, and then we can go into how this links to your business.
Speaker 8:So the Summa transaction is super interesting. It is a venture capital firm buying a health system, transforming it, and Commure is obviously a big part of that. We're serving as the office of the CTO, so our engineers are forward deployed. Wow. We work hand in hand with the Summa IT teams.
Speaker 8:We've been working with the revenue cycle leaders, the clinical leaders, and it's a really special system. I mean, it's in Akron. If I'm not wrong, it's where LeBron James was born. Literally, the hospital itself. And
Speaker 2:Many people have been calling you the LeBron James of health care AI.
Speaker 3:Now we gotta put that out there.
Speaker 2:Yeah. I mean, I maybe have been the first person to say it.
Speaker 1:You might have coined it here,
Speaker 2:but, you many I'll
Speaker 1:I'll say it right now. You're the Lebron James of health care. Yeah. Now many people say not just one.
Speaker 2:Two as two as many Yeah. In our book.
Speaker 8:We're just gonna make that a thing now.
Speaker 1:Fantastic.
Speaker 8:It it it's it's remarkable because running a health system is super hard. It is a 1% to 3% operating margin business. Most of them go out of business. And think what what General Catalyst believes in is language models and technology can transform the operating margin and also lead to better care. So it's not a PE, you know, cut and and and and juice play.
Speaker 8:It's it really is an an investment.
Speaker 2:That's awesome. Talk about Komure's overall product strategy. You guys have a number of different products. It seems like a very different, you know, that we we've talked with founders and covered companies that come on and and just wanna own one, you know, one key area. But healthcare feels like somewhere where if you can get embedded with the set of customers, you can, you know, know, more, you know, rapidly kind of add add products to to the platform?
Speaker 2:So I I'd love to understand the the product strategy.
Speaker 8:We we really look up to businesses like Rippling and and Ram. I was gonna say. Mhmm. Where, you know, there's this concept of you you enter with a wedge. And in our case, that wedge is either Ambient AI, which is a tool that helps a physician document and really automate the revenue cycle of their appointment, generates the claim automatically.
Speaker 8:And then the back office, which is all and when you walk when you walk into a hospital, there are tens of thousands of people at large health systems whose sole job is fill out claims, call up insurance companies, fight denials, fill out new forms. All of that's going away with LLMs. And our belief is that if you do that as a point solution as, like, a single, you know, little part of the part of the solution, you might get some initial usage. But eventually, the EMRs like Epic or companies such as ourselves will just eat you. And, you have to be that compound startup from the get go.
Speaker 8:And I think payments is a really interesting vector to deploy software. Ramp has shown it where you get into the the transaction suite, and then you build a whole bunch of tools for the CFO's office. We're trying to do the same for, health systems CIO and and and CFO.
Speaker 1:Can you tell me a little bit of the the history of the health care industry broadly and how I I I know that there was, like, this kind of catalyst around Obamacare. I remember talking to Jonathan Bush, the founder of Athena Health, about electronic health records mandates. And there's been a number of changes kind of at the federal level that have kind of opened up different pockets of opportunity. Like, what is the story that you tell about the recent history of health care in America?
Speaker 8:I think it's it's fascinating. The nineties, physicians had amazing lives. I mean, they they drove Porsches. They had work life balance. They had personal relationships with their
Speaker 1:Let's hear it for Porsches. We'd love to hear that.
Speaker 2:Let's get back to Porsches.
Speaker 1:We need a
Speaker 2:return. We need a return.
Speaker 4:And and, you
Speaker 3:know, all all
Speaker 8:in all, physio patients got great experiences too because because of that personal relationship. Yeah. And then, you know, the admin work work tax just increased. Everything from insurance to filling out an EMR, digitization came in the February with Obamacare and Meaningful Use. And Mhmm.
Speaker 8:Really EMRs proliferated. And Jonathan Bush and and Judy and, you know, all these people are legends in the industry because they built Athena, a $20,000,000,000 company, Epic, probably a $100,000,000,000 company now Yeah. On the backs of that very quietly and under the radar from from most of tech. I think the theme in the story of today is labor is turning into software. And where is most white collar labor in America?
Speaker 8:It's in health care. Where, like, where where are the majority of administrators sitting behind a computer clicking on forms? It's in health care. And we believe that the EMR will be transformed. We also believe that the labor stack of health care will be transformed, and it'll create more operating cash flow for hospital owners.
Speaker 1:Is that narrative of the, like, the the administrative ratio or the administrative load increasing. Is that similar to what happened in in academia? Because I I remember seeing these charts of, like, the ratio of professor everyone loves the idea of, like, a high functioning university with a lot of professors teaching students and a great ratio there. Everyone's a little bit more skeptical about, like, wait. Why do we have five times as many people to add admin?
Speaker 1:Is that the same thing that's going on in health care? And and kind of what was the underlying driver of that? Was it just regulation or or lack of tools? Did it come from?
Speaker 8:I I think it's very similar. What I will say is I think in health care, it bred more out of necessity. And in academia, it just kinda happened. Mhmm. In in in health care, there's this game of attrition between the insurance company and the provider.
Speaker 8:And you're they're making it a little harder every month, every year to get an approval on a claim. And as a result, the health system needs to add a couple more people, in order to fight those claims. And then it just kinda built up into this arms race. And I think the insurers kinda carried the power after Obamacare. Like, the when you look at UnitedHealth's market cap, I mean, it's, what is it?
Speaker 8:Like, a 12 x since Obamacare got passed? It's it's quite shocking. And the power dynamic, I think, will shift again back in the favor of physicians and hospitals because of LLMs and because of what you can now automate.
Speaker 1:Yeah. It was kind of just like the the game theoretic Nash equilibrium was like hire a lot of a lot of admin staff.
Speaker 8:Interesting. There was no other option.
Speaker 3:Yeah.
Speaker 2:Yeah. Talk about your personal ambition and the team's ambition. You're a $6,000,000,000 company now. Seems like, you know, it's cliche, but it the way you're talking, it feels like you're just getting started.
Speaker 1:Is the job finished?
Speaker 2:Yeah. Yeah. It sounds like the job's not finished. I don't wanna put words in your mouth.
Speaker 8:Job's not finished. We're look. I I think when you walk into a health care practice, the inefficiency is shocking. Yeah. And it's and the the positive intent from the physicians and the nurses and the caregivers themselves is all there.
Speaker 8:And I think all it takes is for a company like ourselves to come in and try to nuke that work tax. So our ambition is, look. We're we're gonna come after the EMRs. We're gonna come after the payers or revenue cycle businesses. This is a $4,000,000,000,000 industry you can build for a very long time.
Speaker 8:What done looks like is when you walk into a physician's practice, scheduling intake insurance, like, all handled. There's no filling out a little clipboard of the same information again and again. The appointment happens, and instantly, the doctor is paid out. There's no reason we can't have instant adjudication instead of, you know, waiting thirty, forty five days. But it's gonna require a system overhaul, like new payment rails to go do that.
Speaker 8:And that's really what what's at the heart of what Komir is building.
Speaker 2:Awesome. Well, this is super exciting. I'm glad you're doing what you're doing. And you are new or you're our new health care expert and correspondent. So expect to call.
Speaker 4:To And LeBron
Speaker 2:James. LeBron of EMRs.
Speaker 1:Yes. Alright. According coach, of Congratulations
Speaker 2:on the milestone. Hope hope to have you on again soon.
Speaker 1:We'll talk to you soon. Cheers. Have a good one.
Speaker 2:Should we do some timeline?
Speaker 1:Fun show. Fun show. Yeah. We definitely should. We gotta talk about Sam Lessons, Oracle versus Salesforce.
Speaker 1:He's getting in he's getting in hot water.
Speaker 2:You got it in hot water.
Speaker 1:The timelines and turmoil. We love Sam Lesson on this show. He posted a screenshot. He says, Oracle I
Speaker 2:will defend big tech. I will defend
Speaker 1:Sam Lesson. Oracle is two x Salesforce, but Ellison is worth 25 x Benioff what this sale says about the limitations of the SaaS business model. He said, he had a fun riff yesterday with the slow partners on this. Oracle is obviously crushing it. But if you take a today snapshot, basically, market cap of Oracle is two x Salesforce, 500,000,000,000 versus 250,000,000,000.
Speaker 1:Meanwhile, according to previously directional at best bet data, Benioff's net worth is one twenty fifth that of Larry Ellison's 10,000,000,000 versus 250,000,000,000. What do you learn from that? What lessons do you draw? I like this. The revealed preference.
Speaker 1:For founders and companies, the old licensing model is better than SaaS. That's interesting.
Speaker 2:Imagine having
Speaker 1:Hot take.
Speaker 2:10,000,000,000 and just getting little bro ed by Larry. He has a sort of Lil Lil Bro ing effect
Speaker 1:on most There's an amazing story about a little bro a famous Lil Bro ing where Phil Knight of Nike was worth something like 10 on the order of like $10,000,000,000 and he was in like maybe Sun Valley or something going to a movie and he runs into Bill Gates and Warren Buffett who are just going out to a movie and they're they're both worth 10 times him. And he's just like, yeah. Had this weird awkward moment where I was like nervous to meet them for the first time in a long time because typically he's like the most successful businessman he runs into all day. Right? But he was just like, yeah.
Speaker 1:He in his book Shoe Dog, it's a fantastic book. Shoe Dog is a great is a great book. Great book. And he talks about like all the weird effects of like having immense wealth. How like his wife would like hoard immense amounts of like paper towels just because they're like money is no object like what should we with this?
Speaker 1:I get
Speaker 2:Think of
Speaker 1:the castle. Got a lot of paper towels. And they had like figure out, okay, this is like some weird psychological thing that's going on in my brain. Like, I don't actually need paper towels. The fact that money's no issue doesn't really matter.
Speaker 2:Yeah. People like to talk about, you know, you're the you're the you're the whatever the the average of your five friends. Yeah. And it's like, well if you want there, there should be some similar law of like your growth rate is like should be is like tied to how often you're a little bro'd.
Speaker 1:Yep. You know? Yeah. Yeah. Never get a little bro'd.
Speaker 2:No. No. You want to be
Speaker 1:a little Yeah. That's true. You can be on the upward swing. That's right. If you're not getting little bro'd enough, you're not on an upwards upward trajectory.
Speaker 1:This is good. Yeah. Yeah. This is good. I've I've been in that situation before.
Speaker 2:Anyways, we could cover what Sam said but I think we can just skip to
Speaker 1:We're also gonna have him back on the show.
Speaker 2:We're gonna have him back on the show. He's a regular. We're gonna skip to Miles. He says, wrong take. Ellison is much richer because he didn't sell shares and has steadily been buying back 2% of the company every year for thirty years.
Speaker 2:He's increased his ownership from 17% to 40%. Wow. It's such an incredible story. Founders complain about dilution. Yeah.
Speaker 2:Oh, you got diluted? Yeah. I'm sorry. Don't you just buy back shares every single year for decades? If Ellison keeps doing this, he could very well own 150% of his capital at some point.
Speaker 1:That's the future for OpenAI. OpenAI just becomes the agentic organization. It just buys back so many shares that it eventually owns itself. Yeah. That's the real goal.
Speaker 2:Anyways, Miles says, meanwhile CRM, aka Salesforce made a lot of dilutive acquisitions and Benioff steadily sells his shares yearly. He doesn't sell them yearly.
Speaker 1:Sells Daily. Daily. 2,000,000 daily. Buko
Speaker 2:capital
Speaker 1:Yeah.
Speaker 2:People say, oh, liquidity events, you know, they're they're few and far between.
Speaker 1:Not even daily.
Speaker 2:Not happening every day. Not for Benioff. Daily liquidity.
Speaker 1:It's pretty good.
Speaker 2:You know, the the real, you know, another how much does he pay Matthew McConaughey to just hang out?
Speaker 1:That's gotta be pricey.
Speaker 2:I think it's only like 10,000,000 a year. So it's like
Speaker 1:Yeah. Couple Super Bowl ads. Not bad. So Sam responds to the the hate. He says, since a lot of folks are making the same comment about buybacks versus sales strategies, that is at best the noob answer.
Speaker 1:If you're smart, you understand why they have different paths. And the answer is path dependency from business model quality. Take a two zero one level class. And then boohoo capital blow quote tweets that and says, it's a timeline in turmoil. Wrong again.
Speaker 1:CRM has executed poorly. They've diluted shareholders with bad acquisitions. They have 75,000 employees who they give excessive stock based compensation to. They let hubs scale up in their face, HubSpot. They've diluted versus shrunk their share count, versus the other companies, Adobe, that eats shares.
Speaker 1:Investors don't trust him. If Benioff held and cared about shareholders, it would be a closer call. He doesn't care. It's not about the business model. Well, you'll love to see some some timeline in turmoil.
Speaker 1:Very, very fun. In other news, Sheila Monat has the story about Telegram's founder Pavel Durov. Consistent feature on the, tech bro drip account.
Speaker 2:Everybody says they're pronatalist until they until you ask how many how many, children, you know, have you fathered through sperm donation?
Speaker 1:Apparently, he has fathered over 100 kids via sperm donation, and he is worth $14,000,000,000. And he says he'll leave his fortune to all of them with no difference between his six kids conceived naturally versus via, compared to the 100 via sperm donation. So every one of them is gonna get a $140,000,000 just to kick off fundraising. Just start investing that. You got your family office on day one if you're one of Pavel Durov's kids.
Speaker 1:Pretty remarkable.
Speaker 2:Yeah. Single LP. It's kind of a good dynamic.
Speaker 1:Yeah. I wonder how he's gonna get liquidity for for Telegram at some point. Because you get a bunch of Telegram shares. It's kind of like this this difficult beast to wrangle. Yeah.
Speaker 1:I mean, I guess
Speaker 2:you take it public and
Speaker 1:at some point and and get liquidity out of that. I don't know. I mean, it also just prints money. So even if it's like 14,000,000,000, like, you could just get, like, get a stake in the in the distributions because it's making money.
Speaker 2:I think he kind of figured out life and wanted to make his life basically a 100 times more complicated. Hundred years. You know, having having this type of dynamic and you know, not just with his many Yeah. Children that that he helped conceive directly Yeah. Versus the 100 others.
Speaker 2:So had to one up Elon, had the little bro Elon
Speaker 1:He did little bro. Little and Elon's commented on this too. He was like, oh, I got rookie numbers. Genghis Genghis Khan over there is really taking over the world.
Speaker 2:It's Genghis Khan of encrypted messaging.
Speaker 1:Yep. It's very, very odd. Only CFO says finance department out drinks sales. Feels like sales is inviting finance to the party so they can stick them with the bill. This is And this
Speaker 2:is This is the data you can only get from rampramp.com/data.
Speaker 1:Apparently apparently, finance marketing sales teams lead in alcohol spend. Alcohol as a share of business meal spend.
Speaker 2:Not in that order. So marketing is absolutely dominating.
Speaker 1:Dominating. Twenty percent twenty
Speaker 2:19% of all spend on alcohol.
Speaker 1:No. No. It's 19% of business meals are alcohol. So if they go out and they're getting a a $80 worth of food, they're adding on $19 or some or $81 of food, $19 of drinks. That's the idea.
Speaker 1:Alcohol share. Finance, they're getting, you know, $84 of food, $16 of booze.
Speaker 2:Marketing is drinking sales under the table.
Speaker 1:Yes. Yes.
Speaker 2:Narrative violations.
Speaker 1:IT. In the in the in the tail end there, 9.7%. Many Huberman devotees in the IT department, apparently.
Speaker 2:Yeah. Not a not a power lunch, you know, category.
Speaker 1:No. But the three martini lunch will make it back for the for the tech teams. Should we go to this story about the vibe coder who sold his business to Wix for $80,000,000? It's only a six month old company, and there's no external funding. A $189 198,000 a $189,000 in profit in May.
Speaker 1:Bryce Roberts, this is just the beginning. There's gonna be more stuff like this. I think this is pretty cool. Base forty four only employs six people, hasn't raised any any external funding. The thirty one year old built a viral app AI app maker as a side project.
Speaker 1:So you go in there. You design an app. Obviously, plays very well with Wix, which is in the, website building business, but he flipped it for $80,000,000. And he's post economic now. Congratulations.
Speaker 2:Yeah. When I saw this headline, I was I was confused. I was like, okay. So he just vibe coded some something and sold it, but it it is a tool to
Speaker 1:do a vibe coding tool.
Speaker 2:Yeah. Trusted by over 250,000 builders worldwide and nice quick flip. Amazing. He's basically getting a similar similar outcome to, you know, a founder that sells their company for a billion dollars but you know, goes through a bunch of different financing Yeah.
Speaker 1:We're kind of like a mid tier AI researcher these Yeah.
Speaker 2:Starting, you know, like
Speaker 1:Starting out. Yes. Starting out. In other news, John Carmack is absolutely jacked. This is fantastic news.
Speaker 1:Yaxine has the news. He's looking he's looking very built. But John Carmack chimes in. He says a chunk of this is just his wife dressing me in tighter shirts, but I did put on several pounds of muscle this year after switching my random grab bag of vitamins and supplements over to Brian Johnson's blueprint system. Let's hear it for Brian Johnson.
Speaker 1:Really making a difference in the technology world. I was probably not getting enough protein to take advantage of the exercise I was doing. I've always been roughly upper quintile for fitness. Let's go. Regular exercise, but not at the level of serious athletes that most offices tend to have a few of.
Speaker 1:And now he's looking built. Palmer, lucky chimed in. It's a great great day on the timeline. Let's check-in with Tyler. Close out the show.
Speaker 2:I was gonna check-in with this Polymarket. Okay. Polymarket. Bill Chamath launched SPAC
Speaker 1:Oh yeah. 2020 was supposed to this.
Speaker 2:It is up to 70%.
Speaker 1:It's up to 70%. It was 33% when I posted it this morning. Wow. That's big news.
Speaker 2:Amazing. It partially because he came out and he said What did he say? He said 50 eight, he he asked yesterday, should I launch us back? 58,000 people voted. 71% said no.
Speaker 2:He said
Speaker 1:I'm doing anything.
Speaker 2:I hope everyone that voted no feels seen now on to business. I got calls from many Wall Street and crypto titans yesterday. They all want in and their vote matters a lot to me so I will probably do it.
Speaker 1:I love it.
Speaker 2:Maybe this time it will go better. Who knows? The risks are clear though. The last time wasn't a success by any means. I will include this poll and the community in every SEC filing possible.
Speaker 1:That's awesome.
Speaker 2:You'll make an excellent disclosure about the risks and is not short of irony. What kind of company do you guys want? Crying in the casino.
Speaker 1:So Let's go, Chamathi.
Speaker 2:People are absolutely fuming at them.
Speaker 1:Comments, I'm sure.
Speaker 2:But honestly
Speaker 1:Get after it.
Speaker 2:Pretty fair.
Speaker 1:Everyone knows it's a tough coach. He's gonna play by the rules, you
Speaker 3:know?
Speaker 2:And I think I think at the end of the day, it's it's very like, I I look at the next Jamoth's back as like, it it totally will, like it probably will pop. It'll probably, it'll get a lot of attention. It might turn into a meme stock. Right? I I will be interested to see what kind of target he
Speaker 1:I'm 100% excited to follow the story. Yeah. It's gonna be fascinating. Anyway, let's check-in with Tyler and then close out the show. Tyler, I have a question for you.
Speaker 1:Can you guess a number between one and fifty?
Speaker 5:Like a random number?
Speaker 1:Yeah. Random number between one and fifty.
Speaker 5:27.
Speaker 1:27. Are you an LLM? Did you see this? Every single model, they all guess 27 when you ask them between a number between one and fifty. Chad GPT, Claude, Perplexity, Meta, they all guess 27
Speaker 2:We gotta get a World Coin Orb in here to be able to prove that Tyler's not in fact
Speaker 1:Yes. We do. AI. We we he might just be a deep egg. Final review.
Speaker 1:What what did you get done this show? I mean, did you keep playing with mid journey? Were you doing something else? What what's been going on the last couple hours?
Speaker 5:Yeah. I think I I just sent another video.
Speaker 1:Oh, no.
Speaker 5:You haven't been pretty productive. You can watch this.
Speaker 1:Let's see. I like the thing you're gonna crosses behind.
Speaker 2:What's he doing? Did he just, oh, he just took my spot.
Speaker 1:Oh, he took your
Speaker 2:spot. Okay. Wow.
Speaker 1:Yeah. Oh, he comes in with a paper. Breaking news.
Speaker 2:Breaking news.
Speaker 1:Breaking breaking news gorilla.
Speaker 2:The breaking news. We need to get
Speaker 1:you a a breaking news gorilla outfit. And and if he has breaking news, you can print it out, come sit down, take your seat. That'd be amazing. That's good. That's good.
Speaker 1:Are there any others or we're closing it out?
Speaker 5:Yeah. I think that's it.
Speaker 2:That's it. That's it. Well, good work.
Speaker 1:Good work today. Feel like production
Speaker 2:team is
Speaker 1:laughing like they have some other
Speaker 2:Another productive day.
Speaker 1:Too scary for for our audience.
Speaker 2:I saw what I I saw one get sent in the chat and it and it just was really
Speaker 1:It was bad.
Speaker 2:Scary bad looking.
Speaker 1:Well, we will be back tomorrow. We have a great show for you folks. Leave us five stars in Apple Podcasts and Spotify. And thank you for Thanks for
Speaker 2:being here with us.
Speaker 1:Fantastic show.
Speaker 2:Have a great evening.
Speaker 1:Goodbye. We love you.