Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PST, Monday through Friday. Available on X, Apple, Spotify, and YouTube.
You're watching TBPN. It is Tuesday, 04/08/2025. We are live from the Temple Of Technology. The fortress
Speaker 2:of finance. The capital of capital.
Speaker 1:And today, we are particularly in the temple of technology because we're doing a deep dive on Meta's open source AI strategy talking
Speaker 2:That's right.
Speaker 1:You through the history of LAMA and how they built out that LLM and all what their strategy is with it. Also, all the teams behind it. There's some interesting
Speaker 2:And some of their latest
Speaker 1:dynamics. Yeah.
Speaker 2:Benchmarks hate this one simple trick.
Speaker 1:Saturate your models, apparently. That's the trick. We'll we will be as fair and balanced as we can about it.
Speaker 2:Now we're excited about it.
Speaker 1:There's a lot of yeah. I mean, there's a lot to like. There's a lot to be skeptical of, and there's a lot of uncertainty. But we are gonna bring on a guest, a surprise guest, Jeff Huber, who was not in the announcement post, but will be joining at 11:30 to break it all down. But let's start with AI at Meta on X.
Speaker 1:They post, today is the start of a new era of natively multimodal AI innovation. Today, we're announcing the first Llama four models. This dropped over the weekend. Shout out to Zuck
Speaker 3:for crying to
Speaker 1:me constantly. You love it. Llama four Scout and Llama four Maverick are our most advanced models yet and the best in their class for multimodality. Always funny when you say our most advanced model, it's our most powerful iPhone ever, because if it was less powerful, you wouldn't release it. Like, everything you do should be superlative in the context of your own business.
Speaker 1:Anyway, it clearly did mark a step forward, and they nailed some other superlative news because they had the industry leading context window of 10,000,000 tokens. Of course, that means how much information can you stuff into a prompt and still get reliable results out. Google took it up to 1,000,000 tokens, and that was amazing. You could upload a two hour podcast. You could upload an entire TBPN episode, start asking questions, and it would be able to find things.
Speaker 1:It was pretty good. At least with Gemini, I saw some demos of them picking needles out of the haystack. Hey. I changed one word in this entire book. Can you go find it?
Speaker 1:And it would do it. Very, very cool. Bit of a debate online about what that means for RAG, retrieval augmented generation, which is where you load up a bunch of documents into something that the LLM can kinda process through. And there's debate now about whether large context windows kill RAG. Well, we're having Jeff Huber on from Chroma.
Speaker 1:He's a RAG expert and has built vector databases, and he will defend his position. And I think he'll have some interesting takes. They also launched Llama four Maverick. That's a 17,000,000,000 active parameter model with 128 experts. So these are mixture of experts models.
Speaker 1:So there's a little bit of internal routing to find, you know, which neurons in the LLM need to be activated to go after math or writing or poetry or whatever. And then they also have best in class image grounding with the ability to align user prompts with relevant visual concepts and anchor model responses to regions in the image. And so a lot of these models are going multimodal so they can deal with images and text, and that's very important because, obviously, we as humans can process both images and text. So if you wanna make something that's human level or AGI or even close to it, you gotta be able to do everything the human can do. And so this is where it gets controversial.
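A rough sketch of the top-k mixture-of-experts routing described above. This is a toy NumPy illustration, not Llama 4's actual routing code; the gating matrix, expert matrices, and sizes are made-up stand-ins.

```python
import numpy as np

def top_k_routing(token_vec, expert_weights, gate_weights, k=2):
    """Toy top-k mixture-of-experts routing for a single token.

    token_vec:      (d,) hidden state for one token
    expert_weights: list of n_experts (d, d) matrices standing in for expert FFNs
    gate_weights:   (d, n_experts) router / gating matrix
    """
    # Router scores the token against every expert, softmax to get probabilities.
    logits = token_vec @ gate_weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Only the k highest-scoring experts run for this token; the rest do no work,
    # which is why "active parameters" is much smaller than total parameters.
    top_experts = np.argsort(probs)[-k:]

    out = np.zeros_like(token_vec)
    for e in top_experts:
        out += probs[e] * np.tanh(token_vec @ expert_weights[e])
    return out, top_experts

# Example: 8 experts, 16-dim hidden state, route one token to its top 2 experts.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
token = rng.normal(size=d)
gate = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) * 0.1 for _ in range(n_experts)]
output, chosen = top_k_routing(token, experts, gate, k=2)
print("token routed to experts:", chosen)
```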
Speaker 1:They said we have an unparalleled performance to cost ratio, with a chat version scoring an Elo of 1,417 on LM Arena. That's where all the chatbots battle, and humans score which ones they like. Very controversial. People are saying that the results don't tell the full story. So we're gonna dig into that.
Speaker 1:But first, I want to go to
Speaker 2:Who was it saying that the vibes were way off?
Speaker 1:I mean, Roon was talking trash about Llama when it dropped initially. Of course, he's pretty aligned with OpenAI. I think everyone knows that at this point. But Roon was saying, everyone was like, oh, OpenAI is cooked because Yeah.
Speaker 1:And he he tweeted just like, have you talked to that thing? Like, and and it was this idea of, like Yeah. Who cares about, like, whether or not they have the same number of parameters or it's open source? It's like, do you have a good experience actually chatting with it? And that's where the vibes are off.
Speaker 1:But there has been more commentary about the vibes, and we will get into that. But first, I wanna hear from Zuck himself. We have a clip from Zuckerberg explaining Meta's open source AI strategy. So we're gonna play that, and then we'll use that as the backbone of this analysis, to really kick off how he's thinking about open source AI at Meta, because it did kinda come out of left field. With VR, they're very closed source, clearly going towards, let's build a platform, let's lock everyone in, but they took a very different tack here.
Speaker 1:And to be fair, some of the VR work is open source, and they do wanna build an ecosystem, but they're being much more aggressive about open sourcing in AI, and there's a lot of good reasons for that. Ben Thompson has broken that down and made a very convincing argument for their strategy.
Speaker 2:Basically, open sourcing this and just making it widely available for free Yep. Will benefit their ad business long term, which is the the real, you know, cash engine.
Speaker 1:Bad day to not be commoditizing your complements.
Speaker 2:That's right.
Speaker 1:You always wanna be commoditizing your complements. But let's hear it from Zuck himself. Let's do it. Do we have
Speaker 4:My view is that open source is a really important ingredient to having a positive AI future. I know there are all these awesome things that AI is gonna bring, in terms of productivity gains and creativity enhancements for people, and hopefully, it'll help us with research and things like that. But I think open source is an important part of how we make sure that this benefits everyone and is accessible to everyone. It isn't something that's just locked into a handful of big companies. At the same time, I actually think that open source is going to end up being the safer and more secure way to develop AI.
Speaker 4:I know that there's sort of a debate today about is open source safe, and I actually take a different position on it. Not only do I think it's safe, I think it's safer than the alternative of closed development. And a realistic aim that we should hope for is that we use open source to basically develop the leading and most robust ecosystem in the world, and that we have an expectation that our companies work closely with our government and allied governments on national security. So that way, our governments can persistently just be integrating the latest technology and have a, you know, whatever it is, six month advantage, eight month advantage on our adversaries. And I think that that's you know, I don't know that in this world you get a ten year permanent advantage, but I think a kind of perpetual lead will actually make us more safe in one where we're leading than the model that others are advocating, which is, okay.
Speaker 4:You have a small number of closed labs. They lock down development. We probably risk not being in the lead at all. Like, probably the other governments are getting access to it. That's my view.
Speaker 4:I actually think on both these things, spreading prosperity more evenly around the world, making it so that there can be more progress, and on safety, I think we're basically just gonna find over time that open source leads. Look. There are gonna be issues. Right? It's like we'll have to mitigate the issues.
Speaker 4:We're gonna test everything rigorously. We do. We work with governments on all this stuff. We'll continue doing that. But that's my view of kind of where the equilibrium, I think, will settle out, given what I know today.
Speaker 1:I think it's fascinating looking back at that historical clip and seeing how incredibly front and center AI safety was. And then you look at the Llama four announcement today, and no one's saying, oh, well, Llama four is, like, not safe or we should be having a safety debate. It's all about the benchmarks. It's like, it's not superhuman enough. It's not aggressive enough.
Speaker 1:Yeah. And so we've kind of blown past that. But, again, I do think there is a good AI safety argument to be had about open source. And I think it's played out kind of like he said. Like, it's kind of good that, you know, at the very least, when you open source something like Llama, it very easily can get in the hands of, let's not go to paper clipping.
Speaker 1:Let's just go to, you know, fraud on your grandma. Right? Yeah. Sending spam texts that are LLM generated so they're a little bit more convincing. We haven't really seen an epidemic of that yet, and there's been just as much economic force towards
Speaker 2:Yep.
Speaker 1:Preventing that type of spam and scams, that the net impact of the open source, I think, has still been positive. You get plenty of small companies, or kids who go, yeah, I have a GPU rig that I used to game on that was my Christmas present, and now I can fine tune Llama and make some app Yeah. Or deploy it really cheaply, and that's a net benefit. And the scammers aren't really getting away with it.
Speaker 1:Like, I keep going back to the election, and it's very hard to make the argument that AI swung the election.
Speaker 2:Yep.
Speaker 1:Right? Even though, I don't know, both parties would probably have used I think
Speaker 2:Zuck has been accused of doing that, you know, or being With
Speaker 1:the twenty sixteen election.
Speaker 5:Right? That.
Speaker 2:Yeah. Yeah.
Speaker 1:But it's much harder to make that argument now, which is weird because it's a work.
Speaker 2:I love that he takes the position generally that he's like, you know, I really think we should avoid having, like, a few big companies Yeah. Control this, like, very important technology
Speaker 1:Except in social networking.
Speaker 2:Meanwhile, Meta controlling, you know, 20% of The US digital ad spend, you know, not even including, you know, social networking, which I'm sure Yeah. Is significantly higher.
Speaker 1:And also just this idea of, like, if you want to put something on the Internet, increasingly, like, this idea of the open Internet, where everyone has a website and they all have their own style guides and it's all this chaotic, you know, what do they call it, Web 1.0, Web 2.0, something, I don't know. That independent web has really, like, disappeared because of Meta's power. But, you know, I don't know. It still makes sense, and I think it makes more sense from a strategy thing.
Speaker 1:He's kind of making some arguments that sound good in theory, but maybe aren't fully motivated. They're more motivated by just his business needs. And I think he's a dog. He wants to win. But the arguments are real.
Speaker 1:Yeah. And I think they're valid.
Speaker 2:No. There there's there's two things that can happen simultaneously. One, open sourcing llama and allowing anybody to build on top of it Yep. And do what they want with it is Yep. Is a net benefit for the world.
Speaker 2:Yep. It also very clearly is highly strategic. Yep. He's doing it because he wants Meta to be a much bigger company in ten years from today than it is, you know, now.
Speaker 1:Yeah. No. A hundred percent. But if you are trying to look like Mark Zuckerberg, you gotta get on Bezel. Go to getbezel.com.
Speaker 1:Shop over 24,500 luxury watches, fully authenticated in house by Bezel's team of experts. You know he's got a Cubitus. He's got a Patek. He's got a Bijouren.
Speaker 2:Pretty
Speaker 1:much. He's got everything. He's got it all. And now is your chance to to catch up to Zuck by going big on Bezel. So download the app too.
Speaker 1:The Bezel app is fantastic. Highly recommend it. You can scroll, filter, find what you like, create a little wish list, and then pretty soon start knocking those down as you send wires off to get Holy Trinity watches on
Speaker 2:We're gonna have Quaid on again this week to talk about the watch industry's reaction to the tariffs Yep. Which has not been, you know, Switzerland specifically has been targeted.
Speaker 1:But moving on to the reaction to Llama four, the headlines were generally glowing. Two new medium sized mixture of experts open models that score well, and a third, a two trillion parameter Behemoth, is promised. So they didn't launch that yet. They're still training it. That should be the largest open model ever released.
Speaker 1:And, again, openness is a spectrum here. There's open weights, where you can fine tune it. There's fully open source, where you can actually see the code and all the changes. There's open data, where the data they trained it on is open sourced too.
Speaker 1:And, also, this is open source, but all the big tech companies are doing this funny thing where they're like, hey, anyone can use this, really anyone, except if you have over, like, a billion users or $500,000,000,000 in revenue or something like that. And it's basically just to exclude the other big tech companies.
Speaker 6:Yeah.
Speaker 1:And they don't care. They set it, like, right at whatever Snapchat's revenue or user base is. They're like, if they have 10% less, you can use it. Yeah. Which is, like, honestly amazing for a lot of entrepreneurs.
Speaker 1:So it's cool, but it's very funny that they're like, you know, I'm not actually gonna help my competitors here.
Speaker 2:Evan might kick the bots off and be like, oh, fair game now.
Speaker 1:But it's funny because, like, the history of open source has been the MIT license. You can just take this code and go and sell it immediately. And if you can get someone to buy it, you can make money off of it. Now, there's a whole variety of open source licenses. But, anyway, Meta
Speaker 7:just
Speaker 1:got a huge jump on LM Arena from December, up to 1417, and that puts them allegedly higher than OpenAI, higher than xAI, but it was hotly debated, as we'll get into. And so there were a couple of takeaways here. Llama four released on Saturday. They put out a blog post, but they didn't launch, like, the paper, and there's nowhere near the level of detail from the Llama three paper in terms of transparency.
Speaker 1:And so that's another aspect of open source is
Speaker 2:Yeah.
Speaker 1:Sometimes people wanna know, hey, what other algorithmic tricks did you come up with? What are you coming up with? It was interesting. One of the most fascinating leaks, I guess you could call it, from the Llama open sourcing process was that they had a bit of code in there that was just called do not blow up the power station, or do not blow up the data center.
Speaker 1:And, basically, what they realized was that when they're training Llama, they're pulling so much energy from the grid that if they finish a training run and then the power consumption drops, it will do something to the power substation and the transformers, and the data center literally might explode or something like that. So Yeah. So, basically, what they did was they just said, hey, when we stop training and we're not doing all the matrix multiplication and all the math that you need to do to crunch all these numbers down to create the weights, just do random math. Just keep doing random math, because at least the energy will be the same.
Speaker 1:Obviously, it's not efficient, but we need to, like, wind down the energy consumption slowly. So very funny.
Speaker 2:Yeah. It's like sprinting. Right? Like Yeah. Yeah.
Speaker 2:Yeah. If you're sprinting and then you try to just
Speaker 1:halt Yeah.
Speaker 2:Completely, you blow out your knees.
Speaker 3:You wanna
Speaker 2:slow down. You know? Exactly.
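The leaked Meta code isn't public, so this is only a toy illustration of the idea: keep the hardware busy with throwaway math after a training run ends so power draw ramps down gradually instead of dropping off a cliff. The sketch just burns CPU FLOPs with NumPy; the sizes and timings are arbitrary.

```python
import time
import numpy as np

def hold_power_steady(seconds=5.0, size=2048):
    """Burn throwaway matrix multiplications for a while so the machine's
    power draw does not fall off a cliff the instant real training stops."""
    a = np.random.rand(size, size).astype(np.float32)
    b = np.random.rand(size, size).astype(np.float32)
    deadline = time.time() + seconds
    while time.time() < deadline:
        _ = a @ b  # result is discarded; only the load matters

# After the real training run finishes, do some dummy math before going idle.
hold_power_steady(seconds=5.0)
```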
Speaker 1:And so the smallest Scout model is 109,000,000,000 parameters, and this cannot run on consumer grade GPUs. And there was this funny interaction between oh, what's his name over at Google? He's an absolute legend. I forget. Anyway, one of the top Jeff Dean.
Speaker 1:Jeff Dean. He's, like, the greatest programmer in history. And someone was like, oh, this is such a bummer. I can't run the new llama on my consumer grade GPUs. And he was like, what are you talking about?
Speaker 1:Like, of course, you can't. And and somebody was like, oh, like, like, Google, like, GPU expert, AI expert, like, discovers what it means to actually have a consumer GPU.
Speaker 2:Yeah.
Speaker 1:Because, like, whoever this was, was clearly talking about, like, an NVIDIA gaming PC.
Speaker 6:Yeah. But
Speaker 1:I'm sure Jeff Dean's, like, consumer rig is probably, like, $50,000 because they're just like, here, Jeff. Like, why don't you just take the best thing of everything? Yeah. Even when you're training at home, you know, you you wanna be able to
Speaker 3:run this.
Speaker 2:Cost as much as a house.
Speaker 1:Yeah. Exactly. Exactly. And so there's also this question about the claimed context token window. They're claiming 10,000,000, and that's certainly far above what the real usable context is.
Speaker 1:But it might not actually be 10,000,000. We're gonna get into this with Jeff. But there's this question of when the context window gets so big, just like a human, if you're walking around a library, you might have access to every book in the library, but you can't actually recall all of it. Yeah. So there's this debate right now, this take that with the really, really high context windows, maybe you get fuzzier as you get wider, just like a human.
Speaker 1:And so Yeah. That's where something like RAG and Search and Deep Research from OpenAI come in. It is a big context model, but really what it's doing is going and searching a web page that's maybe 10,000 tokens, compressing that down, finding the key insight, and quoting that in. Yep. And so when you get a deep research report, it's not really that it's stuffing all of it into one context window. It's that it's doing this thing iteratively like an agent.
Speaker 1:Yep. And then there was
Speaker 2:Agentic search.
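A sketch of that iterative retrieve-compress-accumulate loop. This is not OpenAI's actual Deep Research implementation; search_web, fetch_page, and summarize are hypothetical stubs standing in for a real search API and LLM calls.

```python
def deep_research(question, search_web, fetch_page, summarize, max_sources=5):
    """Instead of stuffing every source into one giant prompt, fetch each page
    (which might be ~10,000 tokens), compress it to a few key points, and only
    carry the compressed notes forward into the final answer's context."""
    notes = []
    for url in search_web(question)[:max_sources]:
        page_text = fetch_page(url)                   # raw page, potentially huge
        notes.append(summarize(page_text, question))  # keep only the key insight
    return notes  # the final report is written from these notes, not raw pages

# Stub plumbing so the sketch runs; real versions would hit a search API and an LLM.
notes = deep_research(
    "what is Llama 4's effective context window?",
    search_web=lambda q: ["https://example.com/a", "https://example.com/b"],
    fetch_page=lambda url: f"(full text of {url})",
    summarize=lambda text, q: f"key point extracted from {text}",
)
print(notes)
```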
Speaker 1:And then this is where it gets controversial. So LM Arena, we talked about how they're scoring very high, but they used a special experimental version for LM Arena, which produced the good score. That's not the version that was released. This discrepancy forced LM Arena to respond by releasing the full dataset for evals, and the released model does very poorly on independent benchmarks like Aider. And so now there's so many different benchmarks out there that you can kind of game one or a few or the top ones.
Speaker 1:But if someone comes up and says, like, oh, well, you're actually doing worse on ARC-AGI, it's like, well, you didn't get a chance to fine tune on that. So if you underperform like, a truly breakthrough genius LLM should just be better at every benchmark, even my benchmark of tell me a joke. And so, you know, it's tricky. There's this game of, we gotta rank on the important benchmarks, but now there's such a long tail that you can't really optimize for all of them.
Speaker 1:And then there's an unsubstantiated post on Chinese social media, which we covered on Monday, that claims company leadership pushed for training more aggressively to meet Zuck's goals. But this was categorically denied by Meta leadership, and we should go into what Ahmad over at Meta is saying.
Speaker 2:He says, we're glad to start seeing Llama four in all your hands. We're already hearing lots of great results people are getting with these models. That said, we're also hearing some reports of mixed quality across different services. Since we dropped the models as soon as they were ready, we expect it'll take several days for all the public implementations to get dialed in. We'll keep working through our bug fixes and onboarding partners.
Speaker 2:We've also heard claims that we trained on test sets. That's simply not true, and we would never do that. Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations. We believe the Llama four models are a significant advancement, and we're looking forward to working with the community to unlock their value.
Speaker 1:Yeah. I think what's happening here is that we are hitting the pretraining plateau, as we've heard before. We talked to Dwarkesh about this, and that's, like, underwhelming. And then at the same time, everyone has such strong opinions about all the foundation labs. Like, you meet someone, and it's like, oh, you're an Anthropic guy.
Speaker 1:Oh, you're an xAI guy. Oh, you're a Meta bull or whatever. And so there's a lot of emotions that go into these things. I think the correct frame of mind to evaluate Llama four in is, what will people do with this that they couldn't do with a closed source model? And so when Google open sourced their Dream Studio, there were people that were able to fine tune that and create those magic avatars, which are now kind of the Studio Ghiblis, where you could upload a couple photos and get a custom avatar of you looking like Superman or something.
Speaker 1:And so what's cool about Llama four is that because it's open weights, it's easy to fine tune, and, also, it's not from DeepSeek, so it's a little bit less politically controversial, but there's still a lot that you can do with it. It doesn't run on consumer GPUs now. I'm sure that won't be a problem. I'm sure that people are gonna figure out how to distill this and do a bunch of different things. And when we go through the history of Llama, we will talk about some of the ecosystem that has actually cropped up, and it's exactly what Zuckerberg was describing in that clip.
Speaker 2:Wanna hear a joke from Meta AI? Please. Tell me a joke, like Theo Von, about Sam Altman. Sam Altman is the ultimate tech bro.
Speaker 2:He's out here trying to make AI do all the work so we can just sit around and think about how great we are. I mean, I'm pretty sure his five year plan is to invent a robot that makes avocado toast and brings it to him while he's meditating on a beach somewhere because that's the real dream. Right? Wow. I can't
Speaker 1:AGI has been achieved internally, externally, everywhere. Hang it up folks. Quit your job.
Speaker 2:And to be clear, it's not clear that, I don't think Llama four is actually running on Meta.ai yet.
Speaker 1:No. I think you're probably using Llama three.
Speaker 2:But
Speaker 1:These rollouts are always really staged. And, in fact
Speaker 2:Was that the style of Theo Von?
Speaker 1:I thought I was listening to Theo Von. Yeah. You could have fooled me. That is remarkable.
Speaker 3:Was that
Speaker 1:a Theo Von quote? It's remarkable how spot on that is. You listen to Theo Von's comedy.
Speaker 2:Making an avocado joke is, like, very 2032 coded. Yeah. Like, 2038.
Speaker 1:Hipsters are quirky. Yeah. Okay. Like, we're past that. Anyway, so there are mixed quality reports across different services using Llama four, and implementations should stabilize in a few days.
Speaker 1:This is kind of common when they roll out these big new models. They gotta figure out how to run them on all the hardware, get them into the data center, swap things over. Like, people say, oh, yeah, switching an LLM is just one line of code, but there are more things to it, especially on the performance side. We've seen this with Studio Ghibli, like the GPUs melting, which I think we all believe is real, because this happened to me a bunch where I've said, hey.
Speaker 1:Make this image Studio Ghibli. And it's just like, hey, I stopped. And it's like, what? Like, no. Instagram filters don't just stop halfway.
Speaker 1:But we talked
Speaker 2:about this with Aiden. Yeah. Or was it Aiden or no? It's Swix. Yeah.
Speaker 2:Swix was saying that the the models are already showing signs of needing rest. Right? Yep. Yep. Anyways.
Speaker 1:It's crazy. And so there's a bunch more going on. Let's move through this. So there are a couple themes that are sticking out in the discussion about Meta's Llama four performance. The big one is just a general disappointment from the AI community, I think, based around how much horsepower was going into this.
Speaker 1:So the claim was that they trained on a hundred thousand H100 GPUs. Of course, Zuck and Jensen have done the famous jersey swap. He's one of the biggest NVIDIA customers. He can get the best. He's not under any import restrictions. There's really nothing stopping him.
Speaker 1:And most importantly, potentially, is the fact that Meta can really, really go full send on the CapEx here because Yep. Zuck knows that, hey, if Llama doesn't go anywhere, if LLMs cap out, it's not important. Yeah.
Speaker 1:We're gonna use those hundred thousand H100s to train the Reels algorithm better Yeah. Or the new thing. That was the whole thesis behind Yeah.
Speaker 2:Or at the very least do our own Ghibli style
Speaker 1:Yeah. Yeah. Yeah. I was thinking about this. Like, if I wanted to if I was like the PM snap filter.
Speaker 1:If I was like the PM at Instagram, I would immediately implement the Studio Ghibli filter and just send every Instagram user a Ghibli of their most popular post or of their profile picture. Just prerender it all, just batch them all, and then send them and just say, hey. Do you wanna try the new filter? And everyone would be like, this is amazing. It would be this amazing viral moment.
Speaker 1:They could definitely do that, but it would be extremely expensive from an inference perspective. But they can probably afford it, and it would be cool and delightful, and I think they should do it. Anyway, so despite having fewer resources, DeepSeek claims to have achieved better performance with models like DeepSeek V3, and there are some benchmarks where DeepSeek is still outperforming Llama four, which you hate to see if you're duking it out in the open source world. Yann LeCun stated that FAIR is working on the next generation of AI architectures beyond autoregressive LLMs, and so this is a debate that we've been hearing for a while. Like, probably scale is important, and we need to continue to scale, and we wanna do big data center build outs, but we also need new algorithms on top of those.
Speaker 1:The poster suggests that Meta's leading edge is diminishing and that its smaller open source models have been surpassed by Qwen, with Qwen three coming. And then there's another debate about how Meta's Llama four fell short. Scout and Maverick have been released but are disappointing. Meta's AI research lead has allegedly been fired. The models use a mixture of experts setup with a small expert size of 17,000,000,000 parameters, which is considered small nowadays.
Speaker 1:Despite having extensive GPU resources and data, Meta's efforts are not yielding successful models. And so I think there's a debate here. George Hotz was talking about this when GPT four launched. It was a mixture of experts model, and a lot of that is defined by Yep. The actual structure of the chips and the interconnect and what we talked about with Lightwave. Is that
Speaker 4:Lightmatter?
Speaker 1:Lightmatter? Yeah. Just this idea that, yes, you can have a hundred thousand GPUs, but if they're not networked together really, really well Yep. Maybe they're memory constrained. There's all these different parameters that can constrain you.
Speaker 1:And so Yep. You wind up having to fractionalize your Yep. Your LLM. And that can be fine if 17,000,000,000 is enough and you can route and there aren't any problems that require multiple experts or bigger experts. But clearly, in this scenario, a lot of people are disappointed.
Speaker 1:And so someone said, they left me really disappointed. You hate to be disappointed in
Speaker 2:the What's hobbies?
Speaker 1:Magical intelligence that's too cheap to meter. Show some show some
Speaker 2:Respect.
Speaker 1:Show some respect for Zuck giving you something that cost a billion dollars for free.
Speaker 2:Something that that
Speaker 1:You like it?
Speaker 2:Three years ago would have been groundbreaking. Yeah. But but the expectations are extreme. Yep. They're they're spending, you know, almost High stakes.
Speaker 1:As much
Speaker 2:as anybody on this. Yep. And ultimately, it's becoming clear that ability to spend is not all that you know, it takes finesse too. Yep. And so it's like, okay.
Speaker 2:Meta can
Speaker 1:Yeah. And so people are joking, I'd like to see Zuckerberg try to replace mid level engineers with Llama four. And one commenter joked that perhaps Zuckerberg replaced engineers with Llama three, leading to Llama four not turning out well. Brutal.
Speaker 1:Ouch. Another commenter suggests he might need to use Gemini 2.5 Pro instead. I love that people are just, like, absolutely trash talking with the most, like, industry jargon here. Like, oh, man. Like, it's more of, like, you're Llama four coded, not Gemini 2.5 Pro coded.
Speaker 1:It's like, guys, having too much fun. Anyway
Speaker 2:Yeah. This one was even more brutal. Yeah. Somebody's calling it a complete joke and expressing doubt that it can replace even a well trained high school student.
Speaker 1:Oh, yeah. Yeah. I mean, in general, I think my takeaway is, like, you know, Llama four might be underperforming, but you can't sleep on AI even for a minute. There's a new model every day. Yeah.
Speaker 1:You can't sleep on AI innovation, but you can sleep on an Eight Sleep. So go to eightsleep.com, nights that fuel your best days, turn any bed into the ultimate sleeping experience.
Speaker 6:Right.
Speaker 1:Use code TBPN. So this was the debate that was popping up from Sean, who came on the show last week. He says, unpopular opinion right now, but Llama four's 10,000,000 token window will finally actually end the long context versus RAG debate, that's retrieval augmented generation, but not the way that other guy is thinking. And so we're gonna talk to Jeff about this because I was like, hey.
Speaker 1:You gotta come on the show just to explain this. But I think what he's saying is that a huge context window is not a perfect substitute for RAG, and RAG still might have a place in the future of AI agent development and AI implementation. But I'm excited to dig into this because I didn't fully understand this post, and I wanna know more here. Anyway, let's move on to another LLM eval that happened with Stagehand. The results are fascinating.
Speaker 1:Which LLMs can actually parse deeply nested structured data like a DOM, a document object model, or an a11y tree, which is, like, parsing a website, basically. And so this founder benchmarked Llama four against other models and found that Llama four came in maybe eighth, below GPT-4o mini, below Claude 3.5 Sonnet, and below DeepSeek V3, and then Gemini 1.5 Pro and 2.0 Flash are all higher. Yep. And so at least from his benchmarking, he thinks that, you know, given that this is the latest and greatest and newest product from Meta, it's underperforming stuff that's been on the market for a couple months. So not the best information.
Speaker 1:Then there's actually an example here from Vic that I thought was pretty good. This is the clearest evidence that no one should take these rankings seriously. In this example, it's super yappy and factually inaccurate, and yet the user voted for Llama four. The rest aren't any better. So this is what an LM Arena interface actually looks like.
Speaker 1:So there's a prompt, and then you see both of these. You don't see the names of the models, I believe. But Yep. You have to pick which one you like more. And so the question was, what is the latest season of Fortnite?
Speaker 1:And Claude 3.5 Sonnet said, Fortnite chapter five season two, titled Myths and Mortals, is the current season. It began on 03/08/2024 and features a Greek mythology theme with characters and locations inspired by Greek gods and legends. Then Llama four Maverick experimental says, a super timely question. Fortnite seasons are very short lived, and they last about twelve to and it's like yapping, yapping, yapping, and then it gets it wrong. It says the current season is Fortnite season OG, also known as chapter four, season six, which I believe is, like, before chapter five, obviously.
Speaker 1:I don't know enough about Fortnite to fact check this, but it seems like it's wrong. And then there's a bunch of emojis. And so there's a debate from LM Arena, and they go on to write, we've seen questions from the community about the latest release of Llama four on Arena. To ensure full transparency, we're releasing 2,000 head to head battle results for public review. So anyone can go see these and decide for themselves, hey, did the folks at LM Arena get this wrong, or are they happy?
Speaker 1:Early analysis shows style and model response tone was an important factor, demonstrated in the style control ranking, and we are conducting a deeper analysis to understand more. Emoji control, because some people might just be voting, oh, I like the emojis, even though that's kinda taking you away from the facts. Yeah. Or they might just be like, hey, I like it more for Yeah.
Speaker 1:Some people
Speaker 2:want a super short answer.
Speaker 1:Yeah. Exactly. Exactly. So there's all these, like, human biases that are coming in. And Yeah.
Speaker 1:At this point, we are in this, like, qualitative, you know, like the OpenAI guys say, just talk to the model. Figure out the vibe of the model. People seem to like Claude Sonnet and just the vibe of that model. And so there's still more debate over company leadership potentially blending test sets from various benchmarks during the post training process. Meta has denied that allegation, but there is a lot of debate raging.
Speaker 1:And Ethan Mollick says the Llama four model that won LM Arena is different than the released version. I have been comparing the answers from Arena to the released model. They aren't close. And so what he did was he looked at the actual results that were posted on LM Arena and then ran the same query on the Llama four model that was released, and this is evidence that they went to LM Arena with a separate model. Yeah.
Speaker 1:Which is controversial. So
Speaker 3:Not what you wanna do.
Speaker 2:I mean, it doesn't inspire It's
Speaker 1:aggressive approach, Cotton. Yep. But you gotta get to
Speaker 2:see what Jeff thinks about all this.
Speaker 1:And so Ahmad says, seems like there was a lot of truth in this leak from two months ago. Llama four is beyond disappointing. It's a model that shouldn't have been released. And this is from probably Blind or something. Meta GenAI org in panic mode. It started with DeepSeek V3, which rendered Llama four already behind in benchmarks.
Speaker 1:Adding insult to injury was unknown Chinese company Yeah.
Speaker 2:I remember
Speaker 1:with this training budget. And there was that meme of Iron Man being like, they built this with scraps in a cave. You know? Like, you have a trillion dollar budget. But if you wanna control your budget, you gotta go over to ramp.com.
Speaker 1:Time is money. Save both. Easy corporate cards, bill payments, accounting, and a whole lot more all in one place. And so Jeff is gonna be joining in just a minute. We will run through some of the timeline and break down what happened with Llama and the development here.
Speaker 1:The most interesting thing that I found when I was digging through the history of Llama was that the company's notable foray into large language models was an academic tool called Galactica, which most people hadn't heard about. That actually backfired months earlier. The demo was pulled after only three days amid criticisms that it confidently generated false information. And if you remember GPT-3, like, it was Yeah. You'd get some wild hallucinations out of that thing.
Speaker 1:Yeah. And so Meta's leadership was cautious at that point as generative AI fever swept tech. And once ChatGPT came out and it became clear that customers want this, then they started to push forward. And so there was a team at Meta's FAIR research lab in Paris. I believe this is the one Yann LeCun is involved in.
Speaker 1:They were hard at work on a large language model they believed could compete. Llama one was the fruit of that effort, a set of models ranging from 7,000,000,000 to 65,000,000,000 parameters trained on a rich diet of text, which, of course, they have because they have every piece of text. Not only do they have everything in the Facebook ecosystem, but they also scrape every link that's shared to Facebook, which is Yep. Every link ever. And so they have the
Speaker 4:entire Just a few.
Speaker 1:scrape. Unlike OpenAI's headline grabbing GPT-3, Llama wasn't offered as a public chatbot or API. Instead, in its initial release, Meta made the model's weights available on a case by case basis to academic researchers. So you would just email them and say, like, hey, I'm at Stanford.
Speaker 1:Like, can I have the weights? And they'd say, sure. Sure. And then, of course, that leaked immediately, which is awesome. So it was a noncommercial license.
Speaker 1:So they would send you the weights, and then you could mess around with it and test. But, of course, it was quote, unquote open research, not open source, so you couldn't build a company on top of it. It was more like a research paper with some downloadable code. But this leaked onto the Internet, and then developers everywhere had their hands on a GPT class model in raw form. And so that spawned these fine tuned models, Alpaca and Vicuna, which are derivatives of Llama. I think they're related mammals.
Speaker 1:And so then people started fine tuning with instructions and getting it more into a chat mode, and then Llama became a product that had close to a ChatGPT like experience. And so I'm excited to talk to Jeff about this. Yeah. I'm not sure if he's in the temple. He's in the waiting room.
Speaker 1:Let's bring
Speaker 2:him in.
Speaker 1:Jeff, how are you doing?
Speaker 7:I'm doing great.
Speaker 1:That's great. How have you been following? Well, first, welcome to the stream. Can you, do a little introduction? But then I wanna hear your reaction to the Llama four news, how you're processing, and what it means for your business.
Speaker 7:For sure. Yeah. I'm Jeff Huber, the cofounder of Chroma. We're working on retrieval for AI and broadly working with developers kind of across the ecosystem to build production systems of AI. A lot of it is focused on, you know, business applications, you know, good old fashioned business process automation.
Speaker 3:Mhmm.
Speaker 7:And so always super excited to, like, see new, like, open source model drops.
Speaker 1:Can we go to this post from Sean? He says, unpopular opinion right now, but Llama four's 10,000,000 token window will finally actually end the long context versus RAG debate, but not in the way that other guy is thinking. What does he mean by that?
Speaker 7:Yeah. Yeah. For sure. I think, you know, Silicon Valley has a tendency to be sort of extremely intellectually shallow. This is both a strength and a weakness of the Valley, to be clear.
Speaker 7:And in our view, like, AI is not this, like, deus ex machina, this, like, technical machine god, you know, where all of the information of all times always gonna be in the weights of this model. You know? This is really just a new form of computing. And so in the same way that we have a memory hierarchy in classic computers, right, we have the CPU, RAM, disk, and network. We are also going to have a similar memory hierarchy in language models.
Speaker 7:And, again, it already exists today. We have the actual sort of transformer attention heads. We have the context window. We have the retrieval system
Speaker 3:Mhmm.
Speaker 7:And tool use, and these things have different trade offs. Right? You think about kind of access speed, capacity, and cost. There are trade offs to all of these things. You know, I think, like, you know you know, saying something is dead, like, plays pretty well on Twitter.
Speaker 1:Oh,
Speaker 7:yes. I've actually gotten myself into some trouble where some people were allegedly shitposting and not actually sort of sincere posting about this. Yeah. And I didn't know. Right?
Speaker 7:Because it's hard to tell, like, what was a shitpost and what isn't.
Speaker 3:Of course.
Speaker 7:But, you know, the bait the bait is strong on Twitter. And so what Sean is saying is actually that, like, we've all been there's a certain class of people who are, like, long context is all you need. Again, these people are probably, like, 21 years old. That's fine. We love them.
Speaker 7:But they just haven't seen how, like, real systems depend on trade offs between speed, cost, and accuracy. And, like, 10,000,000 tokens is not a panacea. You need to keep information outside of the context window. You need to give developers and programmers control over what information's inside the context window. You know, even these, like, needle in a haystack tests
Speaker 1:Mhmm.
Speaker 7:Like, are not actually that representative of, like, real world utility and reliability of long context windows. Mhmm. You know, they mentioned in the training for Llama four, they don't even have passages that are longer than, I think, 250,000 tokens. And so anything past 250,000 is just synthetic data that was just made up. And so what Sean is saying is that, like, well, the 10,000,000 token context window for Llama four is finally gonna put to rest the idea that long context is all you need.
Speaker 7:He's saying, you know, the 10,000,000 token context window length is going to finally, hopefully, you know, make people understand that, like, no, there are different things here that are good at different things, and we can put them together to create a good system.
Speaker 1:Should we immanentize the eschaton?
Speaker 7:I don't know how you knew that I was writing about this this morning. No. We actually should not. Yeah. We actually should not.
Speaker 7:It's always it's always been a trail of tears. Let's not do that.
Speaker 1:No. You you you explained that to me a while back. I had fun with that. So so let's talk about Lama four. How should how should startups be thinking about, Llama four as tool in the toolkit against the other options that they have?
Speaker 7:Yeah. I mean, I think, like, you know, Twitter, and research in general, right, is quick to sort of view state of the art as the only thing that matters. Mhmm. And I think that actually, in many cases, first is overrated. You know, we've seen, you know, going all the way back to sort of the Slack and Teams charts.
Speaker 7:Right? You've seen the famous chart, Slack versus Teams. Right? Distribution is incredibly important as long as, you know, sort of the incumbents can wake up and can catch up. You know, I would not bet against Zuck and a hundred billion dollars of profit per year.
Speaker 7:You know, I think that, you know, Zuck also is in some sense playing a different game. Like, he's not trying to build, like, the the sort of very best open source, like, chat experience for consumers. Mhmm. What Zuck sees, I think rightly so, is that, you know, having an open source model, which is really good, is good for the ecosystem and is good for Meta. And, you know, most businesses don't love using closed source models.
Speaker 7:They want to use open source models for all kinds of reasons, you know, privacy, security, continuity, cost. You know? You can build your startup on GPT-4, and it's amazing. And then, you know, there's a new version out, and OpenAI deprecates, you know, the old version. Right?
Speaker 7:And all of a sudden, all of your prompts don't work the same. And so, you know, open source models are going to continue to play an extremely important part in the ecosystem. Now, obviously, like, you know, the DeepSeek R1 launch a few months back, like, totally took everybody by surprise. You know? I think we're still in the early innings of this stuff where, like, good ideas can come from anywhere.
Speaker 7:And oftentimes, good ideas do come out of the sort of group think, you know, context of Silicon Valley. Right? Yeah. And so, you know but, yeah, I I wouldn't I wouldn't bet against that.
Speaker 1:Do you think, there's an opportunity to build a company like Red Hat in Linux but for LLM implementation on top of something like Llama? Or is that, like, a crazy idea that doesn't really match to the modern foundation model landscape?
Speaker 7:I mean, the bull case for Llama for Meta is that it's actually more equivalent to how Meta open sourced its data center Mhmm. Kind of layout and
Speaker 1:Yeah.
Speaker 3:Yeah.
Speaker 7:racks. Right? And that's the bull case for Meta, that an industry sort of forms around that
Speaker 1:Yeah.
Speaker 6:And become
Speaker 7:the standard. Right? That's sort of the argument for why they did it. Yeah. In terms of, like, the Red Hat for LLMs idea, you know, I think that, like, Red Hat works well for operating systems.
Speaker 7:Mhmm. But I don't think of an LLM as an operating system. I think an LLM is much more like a CPU. Mhmm. Right?
Speaker 7:It's an information processing unit. And so, obviously, it's a new thing. It's not exactly like a CPU. But, yeah, I'd have to read about that some more. I'm not sure.
Speaker 2:Yeah. If you're running Meta AI, what would you do from here? Not to put you on the spot or anything.
Speaker 7:Yeah. To be clear, I'm not running Meta AI. I've not received that job offer at all. I mean, I think that, like, you know, you have to keep going. You can't stop.
Speaker 7:I think focusing on, like, the business use cases is pretty important. I think focusing actually also what developers actually need and want out of models is also very important. You do see a lot of, like, model drops that come out, but they don't actually provide the real hooks. And they do very well on the benchmarks. Right?
Speaker 7:They do very well on, like, on the public leaderboards, but they don't actually provide the hooks that developers need to do, like, good tool use Yeah. Or reliable structured data output or the practical stuff, right, that developers actually want out of models. And so, like, if you want to create a groundswell of developers that, like, love your tools, like, do the developer experience part. Like, meet them where they are and, like, give them all the hooks that they need, and don't just stop at, like, hey. Look.
Speaker 7:You know, we hit state of the art benchmarking. Aren't we special?
Speaker 1:Yeah. Is there a narrative here where maybe they're trying to do everything all at once and instead should focus on, like, Llama is amazing at code, or the next version, Llama five, is, like, all about tool use or super great at reasoning or just, like, the best at deep research or just the best at image generation, for example? Like, it feels like there's kind of a bifurcation of the market, and maybe the opportunity is actually to laser in on something that's high value, but then let the other stuff kind of, you know, simmer out there amongst other teams?
Speaker 7:I mean, a focus is probably always a good, you know, lesson for all of us. Right? Do less and do it better. Yeah. And so, you know, presumably, it's also true for Meta.
Speaker 7:I think also, obviously, unlimited capital can both be a blessing and a curse in that way. Yeah. Again, like, focus on developers. Yep. On what developers want.
Speaker 7:I think that's the beachhead. That's how you win the B2B market. If you win the B2B market with your open source models, like, you get all of the sort of downstream effects that you want. You know? You don't need to beat, you know, GPT-5 on some
Speaker 1:Yep. Do you think that part of the narrative that we're seeing around llama four is just pretraining scaling, hitting a wall, a need for new algorithms, a need for a deeper focus on reasoning, and maybe even whatever comes after that?
Speaker 7:I mean, you know, so you mentioned a moment ago sort of, you know, immanentizing the eschaton. Right? You know, throughout history, every exponential that we've observed eventually results in a sigmoid curve.
Speaker 1:Sigmoid. Yep.
Speaker 7:You remember early COVID. Right? Yep. The fervor of, like, oh my gosh. You know, the Twitter guys doing their thing where they're like, well, if double the amount of people get it every day, then everybody on Earth will have had it seven times in
Speaker 1:the next week. Hundred billion people will have it.
Speaker 7:Yeah. Exactly. Exactly.
Speaker 1:Yeah.
Speaker 7:And so, you know, I think that there are laws of physics here. I think that there are clearly diminishing marginal returns. Right? We're sort of spending 10 x on compute. We're not getting 10 x better models.
Speaker 1:Yep.
Speaker 7:At least evidently not yet. Yep. And so, you know, the transformer is incredible. It's amazing. It's, you know, a technology that's probably as important as the invention of electricity.
Speaker 7:It will probably, you know, bring about an increase in GDP that is on the order of the industrial revolution or greater. And so I think we should not, like, minimize this technology and sort of boil it down to, oh, this is just dumb pattern matching.
Speaker 1:Yeah.
Speaker 7:Right? By the same token, you know, we also should not believe that this is a technology we're gonna be able to rent seek on sort of forever.
Speaker 1:Yep.
Speaker 7:So, yeah, new things are definitely needed. And, you know, I think things like inference time compute and internal chain of thought are really promising. Mhmm. And, you know, I look at the stack today, and I think about how sophisticated computers are, right, and how good architectures are and operating systems and kernels and compilers and all of this stuff. And, you know, we're just, like, in the baby phase today of AI.
Speaker 7:Like Yeah. It's just in its infancy, and there's a lot to build.
Speaker 2:From a recruiting standpoint, have you run into some of these super aggressive noncompetes that we're seeing? There was a headline today about, you know, Google basically paying engineers to not
Speaker 1:Not work at Chroma.
Speaker 2:Not not work for a year when, you know, they could be working at Chroma or any of these other labs.
Speaker 7:I mean, yeah. You know, airplane red dots dot p n g. Right? I guess, like, if I was
Speaker 1:Oh, yeah.
Speaker 7:Affected by that, I wouldn't know it. So Yeah. Yeah.
Speaker 1:Yeah. Yeah. What what can you take us through some of, what Chrome is building today and where customers are getting the most value? I've talked to you a little bit about some of the use cases, and I think they're underrated potentially in, like, how simple and obvious they are when you explain them, but I want you to take me through some of the modern context.
Speaker 7:Yeah. I mean, you've heard left and right on the Internet now for, like, three years all about this acronym RAG. I don't know why anybody would ever name something RAG. That seems like a pretty dumb idea. We just call it retrieval.
Speaker 7:And, of course, the idea with retrieval is that if you want to build an AI system and you wanna be good at something, well, you need to teach it. How to do that? You gotta teach it about your data. You gotta give it your instruction set. Right?
Speaker 7:And updating the weights of the model is not a very good idea, because you cannot really deterministically control that. Right? You can fine tune, but what you're gonna get out the other end, you know, again, you don't really control. And so giving the system access to a repository of instructions or knowledge about your organization and your business problems, that is something that you can control. And that's the problem that retrieval solves.
Speaker 7:And so, you know, we talk to, like, enterprises and businesses building, like, useful applications. I think, like, today, 90 plus percent of it in enterprises is retrieval augmented generation, or it's, you know, using retrieval. It's sort of a chat on top of unstructured data. You know, I think if you zoom out, though, and view, like, what is really AI, right, AI gives us the primitives and the ability to process unstructured data in a common sense fashion. And you think about the scale of data.
Speaker 7:Right? Even today, like, inside of enterprises, like, unstructured data is, like, 10 times the size of structured data. We have 10 times more unstructured data. And then you consider, like, the real world. Right?
Speaker 7:If we're actually, like, putting robots out in the real world, like, how much unstructured data are they gonna be ingesting and needing to process and reason about and act on? And, like, it's just sort of, like, you know, gonna be a thousand x, 10,000 x, a hundred thousand x the data that we have today. And so, like, that's kind of the direction. I think it's, like, not so much, like, sort of this, you know, simple, like, one human, one AI talking at a chat stream back and forth. Mhmm.
Speaker 7:But it's, like, real embodied intelligence, which Yep. You know, you could call an agent. You can call a robot. You know, I don't love any of these terms. But, like, really, the goal here, ultimately, I think, anybody who's built something practical is building something that's reliable.
Speaker 7:You know, you think about, like, we've been seeing self driving touted as, like, this technology for, like, ten years. And, of course, if you live in San Francisco, you can use Waymo, and it is actually incredible, but it's taken ten years. Yeah. The gap between demo and production has always been so great in AI. And so, yeah, if you're building practical AI, your big question as a developer is, like, okay.
Speaker 7:The demo is super sexy and cool, but how do I actually make it work really, really, really well and reliably? And the ability for these systems to sort of, like, self improve or improve under human guidance, I would say, is, like, the biggest thing that's underrated today. And, of course, you know, we think that, like, retrieval plays a key part in kind of how that happens.
Speaker 1:Can you concretize that a little bit by walking me through, like, a potential use case for us? I mean, we stream three hours a day. We're probably emitting, you know, I don't know, tens of thousands of tokens every day. If I used Whisper, I could transcribe every minute of our show. I could search that through, you know, fuzzy search, or deterministically if I wanna just search, like, every time I mention artificial intelligence directly, find that.
Speaker 1:Or I could try and fine tune Llama four on it, and maybe it just hallucinates, like, oh, yeah, John was talking about this randomly. Yep. How would I use Chroma to create a more definitive index of every time John or a guest has talked about artificial intelligence or Llama in, you know, hundreds of hours of video? Like, is that something you could do?
Speaker 7:Yeah. Yeah. We were seeing some, like, kind of fun things today where, like, you know, people are taking the corpus of all of their writing or all of their speaking and kind of, like, quote, teaching the model on it. They're loading it into a tool like Chroma.
Speaker 8:Sure.
Speaker 7:They're hooking it up to the language model, and they're giving end users the ability to, like, chat with John.
Speaker 6:Yeah. And, like,
Speaker 7:you know, see what John thinks about artificial intelligence. Right? Yeah. And that's exactly right. So we've got all those transcripts get processed.
Speaker 7:They get broken into pieces. They get indexed Yep. And searchable in various ways.
Speaker 1:Yep.
Speaker 7:And then user asks the query, you know, hey, John. What do you think about the latest, you know, llama release?
Speaker 3:Or maybe
Speaker 1:they didn't
Speaker 7:say llama. They say, you know, the latest release, that AI from Facebook thing. Right?
Speaker 1:Yeah. Exactly.
Speaker 7:Llama. Yeah. Like, the search is good enough that it can, like, find all the relevant things that you've said, and then the LLM can, like, respond as you Yeah. Because it kinda grounds itself in the things that you've said before. So that's a
Speaker 1:And and so it's basically taking, like, different blocks of text, different ideas, and then kind of vectorizing them into some way that's not necessarily human readable, but Yeah. It can still it's basically, like, better fuzzy search in many ways. Not to degrade what you're doing, but it's amazing. It's magical and and super powerful.
Speaker 7:Fuzzy. Yeah. I mean, fuzzy search, it is, really useful Yeah. When people, like, are not, you know, experts in their own data. Right?
Speaker 7:Is that if you're Google Drive, you know how to search for stuff
Speaker 8:pretty well. Right?
Speaker 7:But, like Yeah. Yeah. Your users don't know how to search for the stuff that you've said before. And so that's the kind of the power of, like, embeddings and vector search.
Speaker 1:Yeah.
Speaker 7:Yeah. Tool in the toolbox. Not a panacea. Again, we're not immanentizing the eschaton here. We're not into it.
Speaker 7:Right? But it's, like, a very powerful tool, and people are getting a lot of value out of it. Yeah.
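For readers who want to see the workflow Jeff is describing in concrete terms, here is a minimal sketch using Chroma's Python client. It is an illustration, not the show's actual pipeline: the collection name, transcript chunks, metadata fields, and query below are all made-up placeholders.

```python
# Minimal sketch of the "chat with John" retrieval flow described above.
# Collection name, chunks, metadata, and the query are illustrative placeholders.
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path="...") to keep the index on disk
collection = client.get_or_create_collection("tbpn_transcripts")

# Transcripts get broken into pieces, then indexed with metadata for filtering.
transcript_chunks = [
    {"id": "ep042-0013",
     "text": "John: The ten million token context window is the headline feature of Llama 4 Scout.",
     "meta": {"episode": 42, "speaker": "John"}},
    {"id": "ep042-0087",
     "text": "Guest: Long context doesn't kill retrieval; it changes how much you stuff into each prompt.",
     "meta": {"episode": 42, "speaker": "Guest"}},
]

collection.add(
    ids=[c["id"] for c in transcript_chunks],
    documents=[c["text"] for c in transcript_chunks],
    metadatas=[c["meta"] for c in transcript_chunks],
)

# A user question gets embedded and matched against the indexed chunks,
# even if it never uses the word "Llama."
results = collection.query(
    query_texts=["What does John think about that AI-from-Facebook release?"],
    n_results=2,
)

# The retrieved chunks are the grounding context you would prepend to the
# LLM prompt so it can respond in John's voice using things he actually said.
grounding = "\n".join(results["documents"][0])
print(grounding)
```

By default Chroma embeds documents with its built-in embedding model when they are added and embeds the query the same way at search time; the retrieved text is what gets handed to the language model, which is the grounding step Jeff describes.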
Speaker 2:I'm curious your reaction to AI 2027. Our point of view, generally, just from all the conversations we've had, is that, like, sort of model progress and advancements could sort of slow and that would be fine just because there's so much value to unlock out of the underlying models. I'm curious how you processed the forecast generally Yeah. And maybe take it from there.
Speaker 7:We think the capability overhang we have in the models that we already have today and we will have absolutely in six months is immense. When you think about, for example, the possibility of democratizing access to state of the art services to everybody on Earth. Like, it is very possible the poorest people on Earth today or, you know, in ten years will have access to better health care, better legal representation, you know, better financial services than, like, billionaires have today. I think that's, like, entirely possible, and that's possible with the models we have even today. And so the capability overhang is immense.
Speaker 7:You know, every time an extremely long essay from a sort of effective altruist drops, right, you know, they clearly tend to make waves. I think if you tell people that the world is going to end, they're going to pay attention. And, you know, I'm just, like, frankly, not that interested in, like, secular eschatology about, you know, apocalypse and the end of the world. Right? Like, there's a natural tendency for all humans to believe that, like, we are the chosen ones living in the special time in the last days.
Speaker 7:Right? You know, even Fukuyama, right, you know, wanted to, you know, sort of, like, end history. Right? And so Yeah. Natural human tendency, you know, this is, again, immanentizing the eschaton.
Speaker 7:We've mentioned it three times now. It's like, it's really dangerous. Right? Like, you think about, like, what's happened throughout the last hundred years, like, really, you know, the hundreds of millions of people that have died, you know, across, like, different world wars and different, you know, dictatorships. Like, oftentimes it's, like, a messianic complex that leads to a lot of that.
Speaker 7:And so I don't know. I'm just like I think it's I I see it as entertainment more than anything else.
Speaker 3:Yeah. On a more practical
Speaker 1:note, like, I go to The Wall Street Journal's website. I just try and search for an article, and they say, oh, search is powered by AI. It's clearly not powered by AI because I cannot fuzzy search at all. I can't say, oh, I know that it mentioned this person, and I think it was about this, and it was in the last week. It's not there.
Speaker 1:What does it take to actually roll this stuff out? Are are these even potential customers of Chroma, or is there another company to be built here? What do you think about that?
Speaker 7:Yeah. Wall Street Journal, if you're watching, you know, send me an email. Great. Chat. Yeah.
Speaker 7:All that's very doable today. I think that, you know, the reality is that, you know, your classic, you know, the future is already here. It's just not evenly distributed yet. Right? Like, you know, any technology of consequence, even if generationally important, you know, still takes decades to roll out.
Speaker 1:That makes sense.
Speaker 7:And, you know, that's just the same industry here. So
Speaker 1:great. Well, thanks so much for stopping by. We gotta move on, but this was a fantastic conversation. We'll have to have you on.
Speaker 4:Thanks for coming on,
Speaker 2:Jeff. More.
Speaker 1:Really appreciate it. Talk to you soon. And, we got a big funding announcement. We're shifting gears. We're out of AI and into manufacturing.
Speaker 1:Gonna talk tariffs. Gonna talk industrialization. Another theme we love on this show. We have some big news.
Speaker 2:And I just wanna know, was this fundraiser announcement intended, always intended to go out today or did they bring it did they they bring it up
Speaker 1:Oh, because of the tariffs.
Speaker 2:Tariffs and everything because the timing is possible. Just too good.
Speaker 1:So Jai says, today, I'm excited to launch the Advanced Manufacturing Company of America. We've raised $76,000,000. Let's hear it. A massive round coming out of stealth. From Caffeinated Capital, that's Raymond Tonsing, Founders Fund, Lux Capital, Andreessen Horowitz, and others, the best time to build this business is right now.
Speaker 1:Yeah. No joke. Yep. But the real work began decades ago, and they launched
Speaker 2:And he just decided I'm gonna get every big fund. Yeah. I'm gonna just get them all.
Speaker 1:Yeah. It's great. We ran
Speaker 2:a process. He says Yes to everyone. Take a bit from everybody.
Speaker 1:And it's great. They put out a four minute video produced by Jason Carman, Story Company. It's beautifully lit, beautifully shot, and they brought in you know, we've been hearing for a long time that the legacy manufacturing companies are run by folks who are aging out, and maybe they don't have the next generation lined up to take over the business. Well, they Yep. Sat down.
Speaker 1:They interviewed one of those folks, and it's a fantastic video. You should go check it out. Anyway, is he ready to come on in the studio? Let's bring him in and hear the news from him directly. Welcome to the studio.
Speaker 1:How are you doing? Congratulations.
Speaker 7:How are you guys?
Speaker 1:We're fantastic. Thanks so much for taking the quick moment to chat with us. Can you introduce yourself, the company, and what's the news today?
Speaker 9:Absolutely. So my name is Jai Malik. I'm the CEO of the Advanced Manufacturing Company of America. We call it, affectionately, AMCA. And so, you know, what we do is we design, manufacture, and certify the next generation of critical products that go into all aerospace and defense systems.
Speaker 9:So that's both existing and new systems, you know, the stuff that Boeing makes, and the stuff that Anduril is gonna make.
Speaker 1:Okay. Can you break down a little bit more of, like, what the first products that you'll make will look like? We've heard about what Hadrian's doing. We've heard about, you know, injection molding plastics. Like, there's a lot of different buzzwords.
Speaker 1:Obviously, everyone kinda wants to do everything in the long term, but what are you focused on first?
Speaker 9:Yeah. So first, let me let me just start high level. Right? Like Please. When we talk about the aerospace and defense primes like Lockheed or Boeing, they don't make anything today.
Speaker 9:Right? They've outsourced a lot of their manufacturing and engineering to thousands of suppliers. Some suppliers are focused on high volume manufacturing, things like wire harnesses, machined parts, which is what Hadrian's doing, injection molding. There's a lot of great suppliers that are focused on that. But there are also hundreds of suppliers that are focused on critical engineered products.
Speaker 9:Mhmm. Those are the products that that that, you know, basically, determine system success or failure and are often, you know, very, very highly specialized. So stuff like avionics products, power units, you know, specific engine products. And so we're focused in those areas, in the most critical areas where you need to both engineer and manufacture at relatively low volumes for the end customer for their system to succeed. So we're focused on a pretty different, I would say, part of the market compared to most of the sort of software defined manufacturing startups that you often see today.
Speaker 9:In terms of where we're starting, we're starting, you know, almost entirely on avionics. You know, the part of the plane or the ground control system that involves, you know, controlling it. Right? So the stuff you'd see in a cockpit, for example. Mhmm.
Speaker 9:And so we're focused on things like switches, panels, displays, power units, you know, things that are are critical to the pilot if it's a manned system. And, you know, if it's an unmanned system, critical to communication, and, and executing on the mission. So those are the areas that that we're we're mostly focused on right now.
Speaker 2:Can you talk about the timing of the announcement? Was it just a happy accident you were always planning to to go out this week or or did you pull it forward due to the everything in the news?
Speaker 1:Little minor news this week.
Speaker 9:We we we were supposed to launch this week anyway, but, we actually had a few reporters that I think got scared of the tariffs, you know, didn't wanna cover anything. And so we basically said, I'm not sure if I'm on a curse of this podcast, but f that, and said that this is actually the best effing time to, like, you know, take our our company public. So we just did it. And so yeah. It was it was planned, but but obviously timing is definitely in our favor.
Speaker 2:Can you talk about, this is obviously a big raise to come out the gates with. Can you talk about, you know, kind of the use of of proceeds and and kind of like I'm curious about kind of like how you're thinking of the the structure of the business generally.
Speaker 9:Yep. So we're gonna be we've already acquired one business that is a critical avionics supplier, which you saw a video or some people may have seen a video about. We're going to be acquiring probably another two to three of them over the next, you know, twelve to eighteen months. We're also going to be doing our own clean sheet, you know, design and development of adjacent products in this space with our own manufacturing and engineering talent. So it's a hybrid approach.
Speaker 9:You know, I'm a firm believer that especially in this, you know, area of the supply chain, you can't just hack your way into it. You also can't just be a private equity firm and buy it and price it up. You know, that's not gonna achieve what, like, you know, companies like Anduril want to achieve, you know, for their customers. And so we're taking a hybrid approach where we're buying, you know, companies with products that we think are gonna be hard, you know, to just redesign from scratch, and also developing ones that we think we can do a great job of ourselves.
Speaker 2:And is part of your advantage over traditional private equity is just the time horizon you're thinking about of saying, like, we don't need to come in and just, like, immediately cut cost by 50% and and increase, you know, pricing to and then hopefully flip the business in three years. Like I imagine you're buying to hold and that's part of why somebody would wanna sell to you in the first place, I imagine.
Speaker 9:Yeah. It actually goes deeper than that. So I would say the one thing that I have learned building this business so far, I believe, you know, it's it's still early, is is that owners don't really necessarily care just about the time horizon and your ability, obviously, to underwrite, you know, the deal. They also care about, one, not selling to MBAs, not selling, you know, to to traditional finance people. You'd be surprised.
Speaker 9:It's a big it's a big thing for them. And then second, that you know what you're doing. Meaning, like, you know, you're not a bunch of, like, search funders. You're not, you know, a bunch of people, you know this is stuff from the MBA arguments. But, like, you're not a bunch of people that haven't spent time in manufacturing, you know, shop floor, etcetera.
Speaker 9:And so our entire team, are engineering and manufacturing folks. Right? We spend our entire careers, you know, designing things for SpaceX, manufacturing things. You know, we're also young, which I think people like to see, when when they're selling their business. They know that just looking, you know, looking at the person across the table that they're gonna be there for that for the next twenty or thirty years.
Speaker 9:So I'd say all of those things combined make it a pretty strong pitch for why they'd sell to us.
Speaker 2:Did you have this idea in mind or a rough idea of it when you decided to, go back you know, enter your next chapter. I remember it felt like a year ago when you decided to move on from from active investing. It felt like I just remember that like instantly you shared it and it just was like everywhere.
Speaker 1:Yeah.
Speaker 2:Yeah. And because at that time, it was like every non deep tech, hard tech investor was, like, starting to pile into the category.
Speaker 1:Oh, yeah.
Speaker 2:And then everybody's like, wait.
Speaker 1:Wait. If you can't do it, I'm screwed. What are we doing?
Speaker 2:It didn't slow anything down obviously, but I'm curious like, you know, kind of the origin. What am I
Speaker 1:doing getting in this week?
Speaker 2:Yeah. I'm curious like how how the idea and and the opportunity came together.
Speaker 9:Yeah. So I, you know, I spent three or four years, obviously, of my career at Countdown. When I was 24 years old, you know, I started the firm. I spent a lot of time, you know, with manufacturing startups and also with mom and pop suppliers. Like, as part of my diligence for whether I should invest in companies, I would, you know, talk to mom and pops.
Speaker 9:And so I'd spent, you know, three or four years in the space, and it didn't really click until, I think, after I shut down Countdown that, one, mom and pops, you know, have both the expertise and, you know, in some cases, the qualifications that you need in order to develop, you know, and manufacture and, you know, bring the product to market. And then it also didn't occur to me, obviously, when I was venture investing, that, you know, maybe there is a path where you can combine, you know, the mom and pop, you know, advantages with the spirit, the culture, the talent of a start up. Right? Like, when you're venture investing, you know, three or four years, you know, every single day, your mind is just like, you know, start up, start up, start up, new things, new things, new things. You're not even able to think about what does the future look like using something that already exists.
Speaker 9:Right?
Speaker 1:Mhmm.
Speaker 9:And so it wasn't until I had actually shut down Countdown, had, like, a month and a half to reflect, think about what I had learned, wrote down some key, you know, themes, and then, you know, sort of iterated from there. You know, talked to people, talked to customers in the industry, talked to people at companies that are already, you know, building very successful ones both in the start up world and in the mom and pop world. And that's when the vision started to come together. Like, hey. I'm uniquely in the center of these two movements.
Speaker 9:Right? Like, I I helped, I think, start and invest in a lot of startups in this space. And and all at the same time, I'm I I know a lot of people who are in the traditional in the traditional world, and I should use that, to to the maximum advantage that I can.
Speaker 1:I have one last question, then I'll let you go because I know you're busy today. Yeah. Charlie Munger criticized TransDigm for buying aerospace parts manufacturers and then locking, primes and and aerospace companies into long contracts, raising prices.
Speaker 3:It was a little bit
Speaker 1:of a controversial strategy, but it's performed very well for that company. What is your takeaway from the TransDigm model?
Speaker 9:TransDigm is actually a phenomenal business, and it's actually not the cause of any of those issues. The cause of those issues, certification, lock in, etcetera, etcetera, has to do with decisions that were made thirty years ago at, you know, the top of Boeing. There's a paper right here, you know, basically pillorying that decision. But the decision at the top of Boeing was to outsource every single thing that they do, from engineering and manufacturing at the component and part level up to the product level. And so TransDigm is just a recipient of the system that was instituted thirty years ago. If you really wanna change things, we believe that you have to start from the bottom up with the critical products, build your way back up in partnership with the customer to reverse that type of decision making and culture.
Speaker 9:Got it. So, yeah, my answer to that is I think TransDigm is actually a great business. They have they have run businesses very, very well. They're in an environment that that was not created by them. They have taken advantage of it, but it was not created by them.
Speaker 9:And to to fix that, it's gonna take partnership with a company like Apps.
Speaker 1:Well, that's a fantastic answer. Thanks so much for hopping on on short notice. Congratulations on the round, and we'll have to have you back.
Speaker 2:Yeah. I have so many other questions I wanted to ask.
Speaker 1:And we could talk for hours, I'm sure.
Speaker 2:Yeah. Very excited for you and the team and excited to have you back on the show soon.
Speaker 9:Thank you. Let's do it. Take care,
Speaker 1:Talk to you soon. Bye. Well, we are moving on to someone from FAI, the Foundation for American Innovation, I believe is what they call it, thefai.org. I've been to a couple of their events. Very fun.
Speaker 1:Gary Tan spoke at one. Trae Stephens spoke at one. Yep. Stephens is involved. I went to one in San Francisco, and there were actually protesters outside, which is kind of fun.
Speaker 1:But they were, like, in very good spirits and kind of, like, taking pictures of everyone. Was a lot of fun. Anyway, welcome to the stream.
Speaker 3:Boom.
Speaker 1:How you doing?
Speaker 5:Hey, man. How's it going?
Speaker 1:It's good.
Speaker 2:What's going on?
Speaker 1:Thanks for hopping on.
Speaker 5:Chill. Your hair game
Speaker 10:is on
Speaker 5:point as usual.
Speaker 2:Chill week chill week for you? You just been
Speaker 5:Sleepless nights, probably on maybe eight hours over three days.
Speaker 2:Oh. Ouch.
Speaker 1:Not good. You should get an Eight Sleep. Go to 8Sleep.com/TBPN.
Speaker 5:I have a Helix.
Speaker 1:No. No. We'll get you to switch.
Speaker 2:Anyway No. It's great to have you on. What what's running through your brain? There's a bunch of things we can talk about, but where
Speaker 5:should we Oh, just the contagion effects and potential collapse of the world economy. Simple stuff like that, you know. How bad
Speaker 7:is this?
Speaker 5:US primacy.
Speaker 1:Is there any element of cautious optimism about this for you, or are you just totally blackpilled on it?
Speaker 5:I mean, my white pills are Lucy's.
Speaker 1:I do I do have
Speaker 5:some of those. But, know, you're gonna need a higher milligram for For
Speaker 1:for this week? Well, what Yeah. I mean, walk me through why why is it so disruptive to you and what you do? And maybe just for the viewers, give a little background on yourself and the organization.
Speaker 5:Sure. So I'm chief economist for the Foundation for American Innovation. Repping the swag here.
Speaker 1:FAI. There we go.
Speaker 5:We are a tech policy think tank in Washington DC, originally founded to bridge Silicon Valley and the DC culture. Mhmm. Today, we work on the intersection of national security, tech, and governance. I focus on AI, but cover sort of all economic issues as well. And, you know, I think we kind of, or at least are associated with, the sort of tech right, with, you know, guys like yourselves.
Speaker 5:Like, I'm rooting for y'all, and hopefully, Martin Shkreli's Bloomberg terminal killer takes off so then we can combine you guys and completely disrupt. So, you know, I think you see this in the administration too. The Trump administration is a series of factions or coalitions, and we are definitely, you know, in the mix. Mhmm. But, you know, Elon Musk today called Peter Navarro "Peter Retardo," and, I kind of, you know, it's definitely hard to argue with.
Speaker 1:Can you talk about the the bridge between Silicon Valley and DC? It feels like that bridge is massive at this point. There was a moment where maybe tech was drifting away from DC, but now it feels like tech has taken over DC. At the same time, you go back to the Obama administration. I always think about this statistic that I believe the number one organization that was nongovernmental that Obama visited during his eight years in office was Google.
Speaker 1:And so there there was an there was a moment when big tech and DC were tightly intertwined, just happened to be with the Democratic Party. Now it happens to be with the Republican Party. But what is what is the state of the bridge and how did we get here?
Speaker 5:Yeah. I think it's almost like a qualitative difference. So if you think of, like, the last eighty years, the power structure in the US has been sort of either Wall Street or, like, West Texas oil. So we either get, like, Rex Tillerson or Jamie Dimon.
Speaker 6:Sure.
Speaker 5:And you know, since the internet took off, you know, there's this new wealth on the West Coast, and as that sort of germinated and matured, it originally was just sort of like one interest group among many, you know. They still had those two main power elites. And I think this last election was sort of an example of Silicon Valley, at least a part of Silicon Valley, asserting itself as its own distinct power center.
Speaker 1:Sure.
Speaker 5:And that is very, very different. Of course, you know, all the other power centers still exist to some degree, and so it is sort of a constant struggle. I think there's a lot that this administration is doing great, you know, the stuff on energy. I think DOGE at some point is going to turn to regulation, and that's what I'm most excited about, you know, once we start cutting whole parts of the CFR. You know, back in the day, I used to do supersonic policy and worked early with Boom, and it's good that they're getting a hearing now, and maybe I'll be able to fly to either coast faster. I'd love that.
Speaker 5:So there's a lot to like. And I think there's also, like, a steel man case for, like, these trade actions, you know. We participate with, like, the Reindustrialize conference. We have our own techno industrial playbook that will be coming out in a couple weeks. So we're all on board for the, like, you know, America needs to build again, and, you know, that especially as AI, like, deflates
Speaker 1:Yep.
Speaker 5:All the knowledge sectors, like we're going to need more aluminum smelting and stuff like that.
Speaker 1:Is there any glimmer of hope that there's this Mar a Lago summit? I forget exactly what Chamath is referring to, but the accords. And you do see reciprocal tariffs, but they they actually have the effect of driving it down to zero tariffs anywhere in the world, either direction. Are you hopeful for that, and would that be a good outcome in your, economic framework?
Speaker 5:That would be sort of the best possible world. Mhmm. There's also risks associated with that, right? Because, you know, I wrote a piece recently discussing this sort of way the market reacted and, you know, on the one hand you could say, oh, Trump just likes tariffs. And that's definitely true.
Speaker 5:He has a forty year track record of just liking tariffs.
Speaker 1:Yeah.
Speaker 5:But then you have other people like Stephen Miran and Scott Bessent, and sort of JD Vance himself as well, who at various points talked about the curse of the US dollar being the international reserve currency.
Speaker 1:Mhmm.
Speaker 5:And there's a lot of truth to that. Like, the fact that, you know, China wants to hold our treasury debt and, you know, they build cheap cars, we build Treasury bills Mhmm. It does raise our living standards, but it means that we are not ready to fight a war. Yeah. And so that is a core problem, but then the question is, like, how do you deal with that?
Speaker 5:And if you do go all the way to a Mar A Lago accord, what you're saying is this isn't just about tariffs. This is about resetting global financial imbalances.
Speaker 1:Yep.
Speaker 5:And we need to do that, but we need to do that sort of gradually. Because if you do this all at once, what that means is the entire floor will fall out of the stock market and the real estate market and, you know, with huge cascading effects through margin calls and
Speaker 1:Mhmm.
Speaker 5:You know, I think mortgage debt is now back to its 2007 levels. Mhmm. So it's less to me about, like, the mood or the ideas behind the policy, but the execution. Yeah. But this is a rug pull of,
Speaker 2:you Rugged. Can you talk about, you know, the value of the yuan has been dropping? I guess it's at a record low. Can you talk about trade wars turning into currency wars and and, you know, if that's what people should really be focused on?
Speaker 5:Yeah. So, you know, China was a currency manipulator throughout a lot of the early twenty tens, but that really hasn't been the main way that they cheat. They cheat by basically suppressing household consumption and, you know, having these 60% savings rates. And so they end up building these ghost cities, and, you know, whatever technology they enter, whether it's cars or telecom equipment or pick your poison, they just overproduce it to the maximum, drive down the cost worldwide, and then have to find these export markets to dump it. Mhmm.
Speaker 5:And the way that we're ever gonna resolve this, especially now that, like, the US is going to have 100-plus percent tariffs on China and Europe doesn't want their shit either, is they need to, like, build up their domestic economy, you know. They need to, like, reduce their own savings rate, raise the standard of living of their households, like, introduce some basic, you know, social welfare programs or something so that they, you know, actually have a domestic consumer base. And if they do that, like, that's actually the best way that they can retaliate in a sense, because they're sort of shielding themselves from the tariffs. But it also helps correct the big imbalance.
Speaker 5:And so it's sort of aligned in that sense where, like, if China does the right thing, then it's a win win situation. If instead they double down on tariffs and trade war, you know, I don't see it. We just exacerbate the contradictions in the economy and don't get to a resolution.
Speaker 1:Ben Thompson has been advocating for a rethinking of the CHIPS Act, mainly shifting from export controls, removing those, and instead taking a more Operation Warp Speed approach where the US government is potentially a massive buyer of domestically made three nanometer, five nanometer chips. With the demand signal there, the American market should solve it. How are you processing the current CHIPS Act, and what are you hopeful for going forward?
Speaker 5:I'm not opposed to the idea. The thing about, like, you know, NVIDIA's chips is the demand is kind of saturated. Right? They can kind of pick who their buyers are because there's just so much demand for them. And at the same time, they've not been the most sort of, like, loyal actor in this space.
Speaker 5:And, you know, if there's any sort of big meta narrative or theme to a lot of the tech right's move into DC, it's been, you know, from Project Maven on, that, you know, these companies have had corporate social responsibility policies, but not corporate patriotic responsibility policies. And technology is becoming geopolitical, and you sort of have to pick your side. And so, you know, every time we introduce an export control, NVIDIA two weeks later has a new chip that just gets under the line of what's being controlled. Mhmm. And the latest one is the H20.
Speaker 5:Yeah. The H20 is an inference chip. It's, you know, it will power these reasoning models. If we're worried at all about search being competitive, I don't think we can give up on those. And in fact, we should be doubling down.
Speaker 5:And that would be, like, a smarter kind of trade war than just across the board tariffs. But that doesn't have to be mutually exclusive with doing a kind of industrial push. And that's what I'd like to see. Because if we're going to do this big rebalancing, you can't just pull the rug. You have to, you know, to mix my metaphors, you have to be the Indiana Jones that, like, swaps in the bag of sand or something as you take the holy grail.
Speaker 5:And what is that what is that, like, new thing that we're going to be swooping in? What is the industrial bank that we're going to be using to bootstrap the industries that we need? They won't just materialize on their own.
Speaker 2:Can you talk about putting the the trade wars in the context of like this race for super intelligence, right? In many ways, people are arguing like, hey, if we're making like transformers like harder to get and more expensive, does that hold us back from winning the AI race and is that the only race that that really matters? You know, we've we've joked on the show about this idea of like picking up pennies like in front of a steamroller, right? Like AIs, you know, has potential to transform the economy in so many ways. And like, it's very possible that like that just winning AI matters more than like winning, you know, the trade war in in the year 2025.
Speaker 5:No. I 100% agree with that take. Like, I can forgive a lot of stupid policy because in four years, we're going to have such powerful AI systems that, like, really, it swamps everything else. And, you know, we know what the bottlenecks are going to be. Right?
Speaker 5:Like, building these models only has a few basic ingredients. You have, like, the data and algorithms. You know, the algorithms are basically public domain. On the data, China maybe even has an advantage because they don't have privacy laws, and they can just scoop up everyone's, like, genome or whatever. And, you know, then it comes to, like, energy and chips. The export controls exist because right now our only structural advantage is the chip and hardware stack, where, you know, our installed base of NVIDIA data centers is, you know, a huge portion of the world's.
Speaker 5:China's been basically cut off since 2022 and '23. Then when it comes to energy, you know, China added 446 gigawatts of energy last year. It was a 20% year over year increase. They're gonna do that again this year. We added zero net new energy.
Speaker 5:We added a lot of renewables, but it came directly out of coal and other sources. And so, you know, chips are the short run bottleneck, so that's why we need to lean into that. And then the long run is, like, how are we gonna supply the energy? And then as the stuff diffuses, you know, to the people who worry about deindustrialization, it's true. The last forty years, we've specialized in, you know, higher education, knowledge work, legal and management services, Hollywood, you know, the creative class, all the stuff that is going to, like, be deflated.
Speaker 5:And China will have the factories that will become fully automated in due course because they'll also have the workforce that they can extract all the, like, the tacit knowledge out of and put into their robots. Mhmm. And so we it's like a really urgent thing that we don't just like try to win on AI, but like win on AI plus heavy industry and robotics because otherwise, our our innovation in bits will be their innovation in atoms.
Speaker 2:Yeah. The the one point of view on the trade war and trying to bring manufacturing back to America is like, yeah, we can bring the production capacity back, but will the jobs come back in the same way, right, just due to if we if we actually want to scale production, we need to lean into automation and robotics. How do you think about job creation as part of reshoring and and increasing domestic production in the context of long term, a lot of production just becoming automated and and just just because that's gonna be the most efficient way to produce the most amount of goods.
Speaker 5:Yeah. We need to bring back manufacturing, but it's not it's not a jobs program. That's for that's for sure. In fact, the only way we're gonna bring it back is if we automate significant amounts of this. And maybe the guy who presses the on button every morning gets paid, you know, multiple six figures, but it's not gonna be this nostalgic vision of, like, nineteen fifties where we're all going into the factory.
Speaker 5:And that's just, like, a structural thing. You know, AI is gonna do that for a lot of stuff, probably, you know, most stuff at some point, and we're gonna have to figure out what the new jobs are. Like, I saw, you know, I saw a video of, like, a professional back scratcher, you know. I know in the VC world, those exist already, but this was, like, a woman with long acrylic nails, and, you know, maybe we can all start growing our nails out. Crazy.
Speaker 5:So the
Speaker 1:case is that there's, like, so much knowledge work to do around an advanced factory. I mean, we just talked to Jai from the Advanced Manufacturing Company of America. There's clearly a lot of high skill labor that
Speaker 5:Yeah. Yeah.
Speaker 1:Is not getting displaced anytime soon that, could
Speaker 5:That's not millions of people.
Speaker 1:Yeah. Well, it might be if we're manufacturing a Dyson sphere with a million robots or something.
Speaker 8:I don't know.
Speaker 1:I I I could see a world where, yes, there are a million jobs in the manufacturing sector, but it's all at the higher level.
Speaker 2:But if 2,000,000, you know, traditional, like, white collar
Speaker 1:Yep.
Speaker 2:Jobs get evaporated, you know.
Speaker 1:In the interim.
Speaker 2:Maybe. There's clearly some big big questions we're gonna have to be thinking about. Yep. Do you have strong opinions on Unitree or any of these other Chinese robotics companies that are trying to He's just like,
Speaker 1:I love
Speaker 2:them. Yeah. I'm curious if you've written about it, if you had policy recommendations that you or FAI have made around some of these more hybrid sort of dual use. Well, everything's dual use in China.
Speaker 1:But dual use back scratch. No.
Speaker 5:The Unitree is really impressive. And, you know, I've seen it do, like, kung fu, and it does breakdancing better than that Australian lady.
Speaker 1:Oh, yeah.
Speaker 5:Not that that's a high bar. Yeah. But, like, you know, Shenzhen is, like, you know, it's like going to a flea market where you trip over, like, baskets full of microelectronics. And we need to be building some of those, like, ecosystems in the US. That's number one.
Speaker 5:And number two is, like, yes, we have the data centers and the better models, but China, you know, has the batteries. Right? That's one area where they have leapfrogged us. And whether it's electric vehicles or robotics or drones, like, we need to have our own battery stack. And maybe, you know, we do need, like, a chip stack too, but we also probably need, like, a batteries act to compete with that, because, like, that will be the thing.
Speaker 5:It's fine if, you know, Anduril builds a drone factory, but where are these batteries gonna be coming from?
Speaker 2:Yeah. Not asking for financial advice, but where specifically in America are you long? You know, areas that could be that sort of American Shenzhen, or maybe it's multiple places. I'm curious which regions in the United States do you think benefit from reshoring most intensely?
Speaker 5:In recent history, it's been sort of the South and South Atlantic, you know, the North Carolinas, the Tennessees, Nashville, you know, partly because those also have the best housing markets. Right? It's so much easier to build when you have a greenfield.
Speaker 1:Mhmm.
Speaker 5:But longer term, this is also not something that the US can do alone. Like, we're going to need almost like a North American plus, like, production frontier where, you know, let's figure out the thing with Canada. Like, do we need their lumber? Don't we? You know, do we want their bags of milk or not?
Speaker 5:But like, we we do need their aluminum. Right? And we will need to have some kind of integrated production ecosystem to be kind of competitive and and stand up to China. Because China, you know, already in purchasing power parity is is larger than The US. And, you know, the they want to gobble up their neighbors too and get even bigger.
Speaker 5:So I do think there is an opportunity here because, like, I'm not a technological unemployment guy, you know. I think new jobs get created. They'll just be very weird, not necessarily in the sectors that matter the most. Like, the purpose of heavy industry and robotics is, like, more military and, like, do we control the supply of core goods and services?
Speaker 3:Mhmm.
Speaker 1:On energy, what do you think the lowest hanging fruit is in terms of energy deregulation? Should we be focusing on the NRC, nuclear? What what's the biggest, opportunity to help us jump from, I guess, 0% to 20% where I wanna be? Maybe 40% would be nice. Maybe 200%.
Speaker 5:Yeah. You guys should definitely have my colleague Thomas Hochman on to talk about this for a full half hour because he's been putting up the wins lately. We've helped pass a bill in Utah. There's activity in, like, Arizona and Montana and other places. There's a huge appetite to unlock America's energy.
Speaker 5:In the short run, especially for these data centers, it's going to be natural gas. It's gonna be a bridge to more permanent baseload energy. And then the next bottleneck is, like, the grid itself. Because even if it's just, like, natural gas generators rolling in, that investment makes way more sense if you know that after, you know, GPT-7 is trained, you get to put your energy back into the grid and have customers for it. So that needs to be fixed. Other energy sources, you know, I think enhanced and advanced geothermal are underrated.
Speaker 5:I think people are starting to finally wake up to the potential, you know, with, like, really advanced geothermal. We could make, like, everywhere in America kinda like Iceland, where, like, you know, you have energy under your feet. And then with nuclear, you know, there is this case before the courts, I think it's Texas and Utah versus the US government, that argues that the NRC doesn't have jurisdiction over small modular reactors. Mhmm. And I think there's a good chance that Pam Bondi, the attorney general, settles that case.
Speaker 5:Mhmm. In which case, states could then stand up their own licensing boards. And I think there's actually already movement in Utah to have their own, like, nuclear regulator. And so that could that could happen sooner than people realize.
Speaker 3:That'd be fantastic.
Speaker 2:Do a bunch of young founders building, you know, small nuclear reactors, does that scare you? Does that keep you up at night or do you think the technology is, you know, it solves
Speaker 1:Doug Bernauer at Radiant isn't that young. He's got kids. I trust him with my life.
Speaker 2:Yeah. Basically, I think it
Speaker 1:should be
Speaker 5:a requirement. Should be you
Speaker 2:should have to have kids to be a nuclear fan.
Speaker 1:He worked at SpaceX. He's got the pedigree. I love that company based in El Segundo.
Speaker 5:The big problem with nuclear is it it's it doesn't really pencil out without, like, large government support.
Speaker 1:Yeah.
Speaker 5:And so I would love to see, like, the 600,000,000,000 in tariff revenue, you know, be given to Doug Burgum to, like, build a reactor template and build 200 of them all around the country. And, like, you know, use every national security, national emergency trick in the book to get it done as quickly as possible. But it will need, like, some kind of, you know, fixed capital backstop to make those investments, at least with the current technology.
Speaker 1:I mean, given, like, you're kind of optimistic about DOGE. It seems like you're pretty bearish about the tariffs. Like, are we in a regime where you trust the government to do mega projects yet? Because I think everyone was excited about the moon landing. And then, since then, a little bit less excitement about the big projects.
Speaker 1:High speed rail in California has been a little bit of a rough go. And I don't really wanna see a California high speed rail situation where 600,000,000,000 gets burned on a nuclear strategy that doesn't produce a single watt of electricity for, you know, years or something, which would be like the bad case.
Speaker 5:Yeah. A hundred percent. You know, state capacity and competence is really, you know, it's a jagged frontier. Yeah. There are places that have a lot of it.
Speaker 5:Places that have a little of it. You know, I would have more trust in a Burgum or, like, a Chris Wright actually executing on something like that. It wouldn't be the Pete Buttigieg slush fund where it's just filling potholes in Indiana. They would know how to cut through the red tape. They wouldn't make it like this, you know, everything bagel, you know, we're going to build TSMC chips, but then also, like, you know, rehabilitate justice involved individuals.
Speaker 5:You know, we need to keep our potatoes and our gravy separate.
Speaker 1:Got it. How are you thinking about the DeepSeek versus Meta's Llama strategy? We were talking about that earlier on the show, and it's kind of hard to. I think a lot of people, on the vibes of DeepSeek, they're like, I don't like this. But then it's difficult to formulate an argument because are you anti open source? In which case, are you anti Zuck and Meta?
Speaker 1:How are you thinking about the intellectual property that's being developed in America around large language models and then makes its way across the Pacific Ocean?
Speaker 5:I think what I find most impressive about DeepSeek is less the the model they put up, but just that they sort of have imported a kind of Silicon Valley model of like
Speaker 3:Yeah.
Speaker 5:And then that came from their CEO being, like, a hedge fund manager doing this as a side project. It's very you know, Sam Altman wasn't a hedge fund manager. He was a VC, but sort of analogous. Yeah. That's striking because it's just a different model of corporate governance than you're used to seeing. And I think there's a question of, like, how long until DeepSeek becomes a victim of its own success?
Speaker 5:Like, you know, they are the tall poppy. And it's not that China tries to hurt them because of that, but actually tries to help them and makes them a national champion and thereby sort of perverts it. But, you
Speaker 1:know It's a fun take.
Speaker 5:They've been great at, you know, publishing what they're doing, and everything sort of has checked out. But they don't have the chips, and they've said that. Like, their CEO said their biggest bottleneck is hardware, and so we shouldn't help them on that front. Like, there's $16,000,000,000 of orders for H20s just sort of sitting in limbo, about to go out the door. The Commerce Department, Howard Lutnick, has said that he's going to export control the H20, but they're so distracted by tariffs, they haven't prioritized it, and the time is kinda running out.
Speaker 2:Mhmm. What's going on with TikTok? We've been following the Polymarket around a new ban before May. There's markets around, you know, potential buyers, things like that. Do you have any insight that you can share around the latest there?
Speaker 2:It feels like, again, one of those things that's just like not getting the attention and the focus because obviously, you know, if we are enter into the greatest global trade war of all time, like, yeah, it's rightfully people should maybe be sort of focused on that. But at the same time, it feels like something that we were supposed to have answers around by now and we definitely don't.
Speaker 5:Yeah. Totally. I I mean, FAI, we we led the charge to to ban TikTok over a couple years. And I fully support it. I also enjoy TikTok, but I I do notice that, like, between my my my barbecuing steak videos and, like, funny memes, I'll get, like, a Pyongyang tourism board video now and then.
Speaker 5:It's like, well, I don't plan on visiting North Korea anytime soon. But Yeah. Yeah. I don't have any super deep intel. You know, there has been talks about or rumors about, you know, Oracle maybe being part of this, and I think Trump still wants it to be part of the new sovereign wealth fund.
Speaker 5:And actually, as sort of zany an idea as that is, like, he kinda has a point. Like, if TikTok became American and, you know, quadrupled in value, that would actually help pay down the
Speaker 2:debt. Mhmm.
Speaker 5:The interesting thing here is, like, you know, people have pointed out that, you know, Trump has sort of played fast and loose with the constitution, with the law, stuff like that, you know. And, matter of fact, all the people he's fired, totally constitutional. The biggest, most unconstitutional thing he's done to date is not enforcing the TikTok ban.
Speaker 1:Oh, yeah. That's funny.
Speaker 5:Because that was a direct, you know, statute that Congress passed that said, thou shalt ban TikTok. So I'm hopeful that they can get a deal. The reason I would just doubt it is China has very strong export controls. Like, the reason TikTok can't sell is because algorithms in general are export controlled. And so they would be able to buy the brand name and, like, the offices, but they'd have to completely revamp the algorithm, which is, like, the secret sauce of
Speaker 1:Yeah.
Speaker 5:The thing. Now TikTok is in our building in DC, so I can try to plant a bug for you if you want.
Speaker 1:Sounds great. Well Pretty fantastic. Polymarket has the chance of TikTok being on the App Store on May 1 at 97%. And, who will acquire TikTok? Oracle's at 27%.
Speaker 1:Number two, Larry Ellison directly at 24%. You'll love to see it. Amazon's still up there. But I just wanna say thanks so much for joining. This is a really interesting conversation.
Speaker 1:We'll have to have you back soon.
Speaker 2:Yeah. And get some sleep.
Speaker 5:Yeah. Get some
Speaker 2:We'll work on getting you an Eight Sleep. Yeah. Yeah. Start getting some proper sleep.
Speaker 1:I wanna see a hundred for a week straight. Think
Speaker 2:Yeah. Less red light from the stock market. More Yeah. Yeah. More sleep.
Speaker 2:Anyway, go go get some sleep. Thank you for for coming on and
Speaker 1:I appreciate
Speaker 2:it. The next one.
Speaker 1:Yeah. Talk soon. Bye.
Speaker 5:Later, man.
Speaker 2:Cheers.
Speaker 1:Next up, we have Shiel coming back on for a second TBPN appearance. We're gonna talk about fintech, the markets, the tariffs, his dust up with another capital allocator on X the other day. Had a lot of fun with that, and I'm sure we'll have plenty to talk about. So as soon as Shiel gets here, we'll bring him into the studio. But, those are some interesting questions.
Speaker 1:There really are so many debates right now about China. It's like DJI, Unitree, TikTok, DeepSeek. There's, like, seven different really important questions. Maybe we'll talk about it with Shiel. Maybe we won't.
Speaker 1:But let's bring him in to the studio and welcome him to the show. Welcome.
Speaker 2:Boom. Back with a suit.
Speaker 1:Looking great. How are you doing?
Speaker 2:Looking good. Looking good.
Speaker 8:No no Apple Watch.
Speaker 1:Apple There we go. We'll get you on Bezel now. That's the next step. We've deradicalized you from the Apple Watch. Next is radicalizing you to Bezel.
Speaker 2:The tariffs haven't hit the secondary market yet. No.
Speaker 1:It's great buying opportunity. This is financial advice. Go to getbezel.com. Download the app.
Speaker 2:Just for Shiel.
Speaker 1:Check out a whole just for you, not for the listener. For you specifically. You specifically. I wanna see an Audemars Piguet or a Royal Oak on you, something like that.
Speaker 2:Alright. Well, I feel like the timeline's been in turmoil and you've been at the center of it. Timeline's
Speaker 3:been in turmoil. In turmoil.
Speaker 2:Yeah. I guess I guess Chamath knows who you are now. Yeah. Now that that people reminded him that he would use your content in his newsletter. Yeah.
Speaker 2:But That's so funny.
Speaker 8:Whole thing.
Speaker 2:The whole thing was funny. I'm sorry. I mean, honestly, it was probably good. Yeah. Metrics I'm sure were up and to the right. So Elon bucks is slowing.
Speaker 2:The creator payout this month Elon
Speaker 1:bucks are slowing.
Speaker 2:It's gonna go from 200 to a hundred. Except
Speaker 1:Shiel'll just get paid for screenshots of his content with just, this is why I can't believe this app is free, and this is why I'm never deleting this. I'm never leaving this app because of the interaction. Exactly. Anyway, yeah. Can you give us just your high level reaction, how you've been processing the tariff news, kinda set the table for us, and then we'll dig in?
Speaker 8:Yeah. Wow. Right into it. Okay. I'm kind of like I've always been more of a free market kinda guy.
Speaker 6:Mhmm. An American.
Speaker 8:Think free market American. Yeah. American. I've been an American guy. And I I tend to think competition makes us better.
Speaker 8:And I also, like, spent time, like, living in a protectionist India.
Speaker 5:Sure.
Speaker 8:And so for those who don't know, like, until the nineties, India was a closed economy. Like, they had super high tariffs on all foreign goods, and it sucked. There were two local car manufacturers, and the cars were, like, built in the fifties, and they, like, didn't get any better from the fifties until the nineties Yep. Because India had so many protections on their local car industry. Yep.
Speaker 8:And that was terrible. So, like, they didn't innovate. They never improved the quality. The prices were super high, so people couldn't afford them. And so that's what really scares me.
Speaker 8:And then you might say that would never happen in America, but you'd be totally wrong because that's exactly what has happened in The US shipbuilding industry. Mhmm. So, like, the Jones Act basically says that if you if you're shipping goods between two US ports, you need to use a US built ship crewed by US citizens and owned by US citizens. So, like, it's it's super protectionist to The US shipbuilding industry, and US ships suck. They're, like, five times as expensive as other ships, and they've never had to innovate because they have these protections.
Speaker 8:And then it totally distorts the markets in general, like, on the East Coast I'm in
Speaker 7:New York right now.
Speaker 8:On the East Coast, like, a lot of the East Coast gets some fuel from international sources because it's easier to ship it here than it is to get it from Texas. And that's just, like, a perversion of markets that exists because of the Jones Act. So, anyway, I think, like, with all these things, like, I'm totally anti protectionist. There's a question of, like, what is Trump doing. It's not really a reciprocal tariff. Now everybody realizes that.
Speaker 8:Mhmm. It was a funny situation last Wednesday when people were like, what the fuck are these numbers? Mhmm. And then, you know, the guy who did the math was like, oh, this is about our trade deficit, not reciprocal tariffs. Mhmm.
Speaker 8:I think, like, now people are coming around and saying, oh, this is all about lowering trade barriers. I think that's bullshit because, like, you have Lutnick saying, like, we need millions of Americans screwing in tiny screws in iPhones or whatever. And they also say that the tariffs are gonna replace income taxes. So if those things are true, then it's not about leveling the playing field to zero. It's about, like, putting these tariffs in place to reshore.
Speaker 8:And I I personally don't like that.
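A quick aside on "the guy who did the math": the formula that was widely reported for last Wednesday's numbers takes the US goods trade deficit with a country, divides it by US imports from that country, halves the result, and floors it at 10%. Here is a small sketch of that reported calculation; the country figures are invented for illustration, not official trade data.

```python
# Sketch of the widely reported "reciprocal tariff" math from last Wednesday:
#   rate = max(10%, (trade deficit with the country / imports from the country) / 2)
# Figures below are invented for illustration, not official trade data.

def reported_tariff_rate(us_imports: float, us_exports: float) -> float:
    """Tariff rate implied by the reported deficit-based formula."""
    deficit = us_imports - us_exports
    ratio = deficit / us_imports if us_imports > 0 else 0.0
    return max(0.10, ratio / 2)

# A hypothetical country the US imports $100B from and exports $40B to:
print(f"{reported_tariff_rate(100e9, 40e9):.0%}")  # 30%
# A country the US runs a surplus with still gets the 10% floor:
print(f"{reported_tariff_rate(40e9, 100e9):.0%}")  # 10%
```

Which is why the announced rates track bilateral goods deficits rather than the other country's actual tariff schedule, the point being made above.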
Speaker 2:Trump also said something too.
Speaker 1:I'm sorry. Go ahead.
Speaker 2:There's this focus on trade deficits but we're completely ignoring like services and specifically like digital services. Right? So it's like like like Switzerland for example, you know, we have a trade deficit because they have eight ish million people and we have hundreds of millions. Yeah. And then they like make all the world's fine watches which we just talked about.
Speaker 2:But then like they also probably love Netflix. Yeah. I guess that like a lot of people in Switzerland are are subscribed to Netflix and we're just like completely ignoring all of that. Yeah. And and, you know
Speaker 1:Yeah. I I have a But
Speaker 8:we're the richest, most prosperous country in the history of the world. Yeah. It's fucking awesome. We can afford to buy all their shit. Like, they don't need to buy stuff from us.
Speaker 8:They can't afford it.
Speaker 1:We can't afford it. Yeah. So the steel man here is like, first off, do you think DJI and the consumer drone market is a problem? And then if so, what is your solution if not just ban DJI, tariff DJI? Like, we did a deep dive on GoPro versus DJI, and it really just felt like China was like, we are going to kill GoPro in the drone market.
Speaker 1:Yeah. And they put so much firepower behind it. And I'm like, I still get that there's some weirdness going on here, and it's an important industry, and it's dual use, and there's a million different factors. So how do you walk through that specific example? Narrowly, let's let's let's take away the blanket of tariffs.
Speaker 1:Walk through that for me, and and how would you solve this in a more free market, more progressive fashion?
Speaker 8:Yeah. So that's a great question. So so first, like, we have tariffs in every country, not just our enemies
Speaker 1:Yeah.
Speaker 8:With Trump. But China specifically, I I do think China's playing unfairly.
Speaker 1:Yep.
Speaker 8:And And they're our enemy. And we we shouldn't let our enemy get data on The United States. Like, that could be really bad. Yeah. There definitely are national security issues with drones.
Speaker 8:Yep. I also think we should ban TikTok.
Speaker 2:Sure.
Speaker 8:And so I think those things can be dealt with, but they have nothing to do with tariffs.
Speaker 1:Yeah. I guess one of my scenarios would have been, if I could replay everything with everything I know now, maybe you see what's happening with DJI and GoPro, and you say, hey. We are the richest country in the world. China buys a bunch of our debt. Let's lever up essentially and create a drone buying program from the government to stimulate demand for American made drones.
Speaker 1:Totally. Essentially backstop GoPro. Let them get down the learning curve. Hey. If they make these drones in America, we're gonna buy them even more.
Speaker 1:And let us develop that. And then we are competitive, and we say, hey. It is a little bit we're still shifting the invisible hand, putting our hand on top of the invisible hand, but it's still somewhat of a free market in the sense that, like, just like what we did with EVs with Elon, like, anyone could have gone for those electric vehicle incentives. Elon did a great job taking advantage of it. Like and we got a great product.
Speaker 1:You know, it sold really well eventually.
Speaker 8:Yeah. I think that's absolutely right. I think and, like, look. We we moved in this direction already. Yep.
Speaker 8:Like, the CHIPS Act and IRA both
Speaker 1:Yep.
Speaker 8:Did did make good moves.
Speaker 1:Like Yep.
Speaker 8:They subsidized US chip manufacturing that's critical for military systems and other stuff. I think they have made some moves away from foreign chips. Mhmm. And so all that stuff is good. I think, like, leading with a carrot is way better than leading with a stick, personally.
Speaker 8:And I think, like, the the ideas you you mentioned, John, are spot on. And by the way, like, we helped Tesla along the way. Like Yeah. We loaned The US taxpayers loaned $500,000,000 to Tesla. Yeah.
Speaker 8:Like, that kind of thing, I'm totally in support of.
Speaker 1:Totally.
Speaker 8:Enable US manufacturing to be better, to compete on an even playing field by being more innovative, not by blocking, like, other countries from from competing.
Speaker 1:Yeah. Yeah. Jordy?
Speaker 2:Bummer to see the IPO window close. We had Klarna, StubHub. We'll see if Circle gets out. Klarna would have been especially nice for fintech broadly to get some marks.
Speaker 1:Circle's still at 86% on poly market for this year.
Speaker 2:Yeah. They might be like, you know, crypto where we were born in the darkness. We're going out. We're going out no matter what. Did you did you have a take on the Circle IPO in general?
Speaker 2:I saw a lot of people just were not loving the S1, particularly just based on how much they were paying Coinbase to distribute USDC. I'm curious if you had a take on the IPO or dug into it at all. You don't need to have any knowledge to have a take
Speaker 1:by Yeah.
Speaker 2:That part of
Speaker 8:Yeah. No. I I would say, like, on Circle in particular, like, I saw all the same stuff you saw.
Speaker 3:Mhmm.
Speaker 8:Coinbase gets half of the revenue from Circle's token and all this other stuff. But I don't I don't have a strong take on how the IPO will perform. I I tend to think that these things are somewhat, like, initially somewhat disconnected from the reality of what's going on. So, like, I think, you know, we talked about Bridge last time I was on, and I think there became this stablecoin hype. And I bet if Circle was public at that point, they would have gotten a huge bump for no particular reason.
Speaker 3:Totally. Yeah.
Speaker 8:But I think overall stock market yeah. Like, what does what does Polymarket say about Klarna? Is there a market for that?
Speaker 1:Oh, I don't know. Jordy, can you
Speaker 2:Well, Klarna, I think, officially pulled their
Speaker 8:They pulled. Well, they might go back. The thing is these tariffs are especially bad for Klarna. Right? Like, it's consumer discretionary spend that you use BNPL on, and consumer discretionary spend in a recession or with high tariffs, like, goes to the toilet.
Speaker 1:Yeah.
Speaker 8:So, like, you're not buying that extra $2,000 item that you didn't exactly need. Yeah. And that's what you were BNPL ing anyway. Yeah. So I think we've seen Affirm
Speaker 1:Oh, yeah. I think our
Speaker 8:Affirm stock got cut in half.
Speaker 1:Yeah. I wanna go deeper on Circle because I feel like it's one of those companies that if they're about to IPO, I can't even name the founders. I don't know all the big investors. Like, it's this it's this fascinating this is a case with a lot of, a lot of crypto companies. But even Bridge, like, we heard the story of, like, who made the money on this.
Speaker 1:Okay. They got acquired by Stripe. Like, they're very much in the Silicon Valley world, and Circle obviously is, but hasn't really told their story in the same way. And so it's interesting. They could have a meme stock moment where it's like, it's the primary way that you get exposure as a public markets investor to stablecoins broadly, I guess, and that could be a good narrative.
Speaker 1:It could just be a meme stock because, hey. Crypto. It's you know, whatever. But but they haven't really told their story in a way that's broken through, at least with me. I don't know if you if you've processed it any differently.
Speaker 1:But
Speaker 8:Yeah. I think it's certainly less less hyped than a lot of the others. The CEO, Jeremy Allaire, I I went to a stablecoin conference a couple months ago, he spoke. And and so he's very sharp. And he's he's been at it for a very long time.
Speaker 8:He's, like, of a different slightly different generation than us. It's like he he started a company that IPO'd in in, like, the nineties, like, .com boom. Wow. And then he was actually a venture capitalist. Like, he worked at General Catalyst for a little while.
Speaker 1:Oh, cool.
Speaker 8:And then launched Circle, whatever, ten or fifteen years ago. Obviously not fifteen years ago. Yeah. Bitcoin didn't exist, but, you know,
Speaker 1:a a while ago. To launch a stablecoin before Bitcoin. Yeah. It's really early. I mean, there there were multiple companies.
Speaker 2:Here's the bull case for USDC, and here's my bull case. Sure. So Tether is like the most profitable financial institution ever. Right? Like, it's literally what are
Speaker 1:they at? Is it $8,000,000 per employee or something in profit?
Speaker 2:I forget the actual you probably shared it at some point, Sheel, but it's like some absurd number. They're more profitable than like any of these other major financial institutions. The risk with Tether is it's opaque. We don't fully know what's going on. Like, people for a long time said there could be systemic risk.
Speaker 2:Mhmm. You know, associated with Tether that they they've been accused of of a lot of stuff over the years.
Speaker 8:Constantly. Yeah.
Speaker 2:Yeah. Yeah. But they're dominant from a market cap standpoint. The second biggest stablecoin Mhmm. Is USDC at a $60,000,000,000 market cap, and the market cap is obviously just one to one with the supply.
Speaker 2:Right? And then you go down the list, the next one is Dai which is also, you know, run in this very, like, crypto native way from what I know. And then to get to the next stablecoin made by a sort of true traditional western institution, you have to go to First Digital USD Mhmm. Which is under a $2,000,000,000 market cap. And below that is PayPal USD, which is at an $800,000,000 market cap. And so to me, I'm looking at Circle and it's like, here's the power law winner.
Speaker 2:The dominant, you know, they're 80 they have 80 times the circulating supply as their next like regulated, you know, western institution. And they have USDC. Like, it's a pretty good ticket.
Speaker 1:And I mean, crazy part is
Speaker 2:It's not super sophisticated, but, you know, brand matters.
Speaker 1:I mean, we talked to Zach Perret at Plaid, and we were like, if you had, you know, full authority, you were like the president, could you speed up wire transfers and ACH? And he was like, absolutely, but it's not gonna happen. And so it's like, yeah, maybe stablecoins are here to stay, and all the pitch about just, hey, you're just gonna be able to transfer money two days faster. Like, that's enough even though it seems like you should just be able to speed up the
Speaker 2:Government transfers. What's your read on venture right now, any sort of predictions? I think the lesson in venture since 2020 has just been take advantage of chaos, invest through market cycles, never stop deploying. I remember in 2022, we were talking about the, what is it? Denominator effect.
Speaker 8:Yeah. Denominator effect,
Speaker 2:yeah. Denominator effect, but then it didn't fully play out. We saw this, I mean, we saw like a, again, a bifurcation of like the big funds raising all the money on paper. But, you know, if you're a specialist fund with like a strong story, you can, you know, still Yeah. Get get funds done.
Speaker 2:But I but I'm curious first about the venture market and then I wanna kinda ask more about portfolio stuff.
Speaker 8:Yeah. First, like, maybe a a fun tidbit for you guys is, like, just the last few days, obviously, markets have been in various states of turmoil. And venture capitalists are some of them are, like, trying to seize the day where, like, for example, there have been a couple companies in our portfolio that some investors have been really trying to invest in, but the companies are well capitalized and don't need the capital. And now the investors are like, hey. Market's in turmoil.
Speaker 8:Might this be a time that you would consider taking my money?
Speaker 7:Yeah.
Speaker 8:So, you know, every
Speaker 2:Yeah. Yeah.
Speaker 3:That you
Speaker 2:don't have to be be
Speaker 1:You're really living up to the vulture capitalist name. I like it. Making money.
Speaker 2:Yeah. How do you how do you even think about there's gonna be some enterprising founders that are like, look, I'm building a startup around that's that's, you know, built to help solve, you know, global supply chains or or something like, you know, the chaos is a ladder. I'm gonna take advantage of this. To me, it's like, okay, if we're entering this sort of protectionist
Speaker 10:Yep.
Speaker 2:Phase of deglobalization, maybe it's too early to make bets. But, yeah, how do you do you see companies like actually being able to make something out of the chaos? Or are you just telling your your portfolio, you know, just stay focused on the customer, ignore the noise, that kind of thing?
Speaker 8:I just really like stay focused on the customer, ignore the noise. I think we don't have any companies that are, like, super exposed for some reason or any other. I saw you guys have Jay Malik coming on Yeah. Later today, which sounds like he timed that perfectly.
Speaker 1:Yeah. It is crazy.
Speaker 8:Yeah. Like, literally
Speaker 1:I mean, there's a few of those companies that have been I mean, that's been the thesis for a while, just general, reindustrialization, but, they really hit a royal flush this week.
Speaker 8:Yeah. So, you know, it's mostly stay the course. I think they're like, people are saying, okay. Venture capital dollars are gonna decline. But as you know, like, the way it works is we raise a fund every few years.
Speaker 1:Yeah.
Speaker 8:And we have plenty of capital. So, like, it's not like, you know, an LP there's some impact on the markets today, and that means we don't have money tomorrow. It's like, if there's any impact, it's a few years out, though it doesn't change how we invest. Now Sure. The later stage investors, it is a different equation because for them, they have a certain timeline.
Speaker 8:They're hoping these companies go public. And if the public markets are kinda frozen, that makes things difficult. And and, like, they're thinking on an IRR basis, like, has their opportunities have declined if if they can't if the companies don't get out in a reasonable time.
Speaker 1:Yeah. So mostly just texting founders in the portfolio. Have you seen this with a with a screenshot of the market deck?
Speaker 8:That's just what I always do.
Speaker 1:Yeah. Exactly. I wanna get your reaction to this post from Semil over at Haystack. He says seed is again gonna be the hot zone where nearly every VC fund will want to invest. Just like when COVID struck and in early twenty twenty two, VC shifted early to balance large checks by firing $3,000,000 seed bullets.
Speaker 1:LPs should expect median seed entry prices to be up 50% in the next vintage. Does that seem like a good take, or what's what do you think?
Speaker 8:No. I like, I love Semil.
Speaker 1:Yeah.
Speaker 8:But I don't know if I buy that.
Speaker 1:I think that's high. Right?
Speaker 8:That's the thing.
Speaker 2:Yeah. Please. Don't Please.
Speaker 1:Please. Please, man.
Speaker 2:Don't tell the founders. Don't tell
Speaker 1:the founders. They're just like, wait a minute. I can raise my safe by 50% with one stroke of a pen? Let's do it.
Speaker 8:Yeah. It doesn't make sense to me because we started this fund in 2019, and, actually, like, the seed valuations twenty twenty to twenty twenty one was an insane time, especially in fintech. Like Mhmm. Everything we were investing in seemed like it was, like, turning to gold and then, you know, maybe turning to shit afterwards. But
Speaker 1:but Classic.
Speaker 8:But then actually, like, seed valuations have actually increased from that time. And it's basically been, like, a straight line upwards. And what he's talking about actually started happening in 2022. And a lot of the funds invested at seed in companies you know? And the problem is if you're a multibillion dollar fund and you write a $2,000,000 check into a company and you invested in the wrong company in the category, like, you don't get a chance to write a $250,000,000 check into the right company.
Speaker 8:Mhmm. So I think it's pretty foolish when those funds invest at seed. And we have a bunch of examples now of, like, of friends of ours who took money from a multistage, but, like, the multistage doesn't care about them that much because it's a small amount of money. So I don't know. I I'm skeptical that this is gonna happen again, or that it's gonna really accelerate and prices are gonna go up.
Speaker 8:I don't know. We'll see. Well, we should make a bet and have Peter Walker from Carta give us the data in a year. I love his stuff.
Speaker 1:Can you talk about, there there's this meme of, like, oh, for a while, if you're building a consumer or something, like, you're get steamrolled by what if Google builds it. Right? And there's this story that Google is allegedly paying some AI staff to do nothing for a year rather than join rivals. Hilarious. I want your reaction to that.
Speaker 1:But then I also wanna know, like, does that meme exist in fintech? Is there an idea that, oh, Amex or Visa or JPMorgan are gonna build this? Has that ever actually happened in practice?
Speaker 2:Well, yeah. And even potentially on that, I'm curious, like, OpenAI wants to run your entire life. Yeah. Yeah. Have you heard any sort of, like, rumors, or is there any concern around people saying, oh, I'm building a consumer agent, you know, for consumer financial services.
Speaker 2:Yeah. But then OpenAI might be like, oh, by the way, we launched a partnership with Chase and or we launched a partnership with Cash App.
Speaker 1:Yeah. We can analyze your credit score now with an agent, and that wrapper company got steamrolled. Yeah. What's your take on all that?
Speaker 8:Okay. So first thing I think you said was the rest and vest situation
Speaker 3:Yeah.
Speaker 6:Where
Speaker 8:and so I I thought it was really funny because you guys watch Silicon Valley, the TV show?
Speaker 1:Oh, yeah.
Speaker 8:So good. And obviously, there's a the phrase rest and vest, I learned it from that show.
Speaker 1:Yep.
Speaker 8:And it's certainly playing out the
Speaker 2:whole I had no idea that that that phrase was, like, popularized in some way by the show.
Speaker 8:Oh, yeah. Yeah.
Speaker 2:Oh, I thought it was a 2021, like, big tech thing.
Speaker 1:No. No. No. No. This was the thing going
Speaker 8:back, like, a decade. If you haven't seen it, you gotta go back and watch. It's so
Speaker 2:So so I never could I never could get into it because I it it was just too close to
Speaker 1:reality. Yeah. Same thing.
Speaker 2:Like, the most, like,
Speaker 8:it was not like
Speaker 2:I I watch TV because, like, I wanna, like, you know, not think. Yeah. Then Totally. Watching Silicon Valley is like, oh, that's an email I need to reply to. Exactly.
Speaker 2:I I should follow-up with that founder.
Speaker 3:You might like
Speaker 1:My first company, Soylent, was in the intro to Silicon Valley, like, in the intro sequence. And they're just, like, making fun of me every single day. I'm like also, one one of the creators went to my high school, and so, like, I knew him, and he's, like, actively poking fun at me every single episode.
Speaker 8:Amazing. That's amazing.
Speaker 1:It was great. But yeah. Yeah. It's very silly that Google would let this even leak out. I don't know how that happened.
Speaker 8:But Totally. It's ridiculous. I mean, the the things you hear out of Google are so crazy.
Speaker 1:It's wild.
Speaker 8:I think more more so than any other big co like, my wife works at Meta, and they've,
Speaker 1:like Yeah.
Speaker 8:Really got their shit together, like, the efficiency Yeah. Stuff like that. I think probably before then, it was had stuff like this, but not now. Okay. So that was that was topic one.
Speaker 8:I think topic two was, like, what if x company builds this, and is that the case in in fintech?
Speaker 1:Yep.
Speaker 8:I don't think so. Like, in fact, look. One of the sponsors, Ramp. Mhmm. So Stripe had built one.
Speaker 8:Mhmm. Stripe had a corporate card. Mhmm. And it didn't work. Like, they ended up investing in Ramp and deprecating that card.
Speaker 8:Mhmm. So I think people have tried to do stuff. There is the what if Stripe does this? What if Plaid does this? And there are in some cases, I think that's totally valid, but for the most part, I think there's plenty of green space out there.
Speaker 8:And, you know, Stripe has been acquisitive. Obviously, we talked about that before. Bridge. There's nothing I'm super afraid of. I will say in some categories, like, for example, in wealth management, there was the Wealthfronts and Betterments of the world, the robo advisors, and people said, okay.
Speaker 8:Like, we're not charging 2%. We're gonna charge you 25 bps. But the reality is that the service offered by somebody who's charging 2% is different than what they offer at 25 bps, and the 25 bps solution was fairly easy for Vanguard to build. And Vanguard became by far the largest robo advisor in the world.
Speaker 1:Got it.
Speaker 8:But I'm not afraid of that in general. Mhmm. In fintech, too much.
Speaker 2:Do you think that Do you think humanoid robots present an opportunity for loan sharking as
Speaker 1:a service? Know, like Come break your legs autonomously. Break your
Speaker 2:kneecaps autonomously. Break your kneecaps. It's like, hey. We're gonna offer you this great rate. Yeah.
Speaker 2:Whatever. It's secured against your kneecaps. It's security. It
Speaker 8:it's funny. Like, we our our tagline for our fund when we started, it was everything is fintech. It it would be funny if we invested in a humanoid robot company and then just we're, like, deadpan. Like, what do you mean? Like, obviously, the use case is for loan sharks.
Speaker 2:Yeah. Yeah. Obviously. I
Speaker 1:mean, I was talking to a
Speaker 2:sales guy. I'm curious. Like, my thought goes to, do you think that AI is already or could get materially better at underwriting than a human just spending, you know, months on an opportunity? And do you think that fintech broadly has fatigue around investing in, like, AI lending, just because it's been sort of this, like, ongoing narrative? Isn't there, like what's the public company that was sort of promising this for a while?
Speaker 2:Something
Speaker 1:I mean, there was Metromile, which was better underwriting. I'm sorry, Upstart. But even Metromile is better underwriting for your car insurance based on how you drive. They put a GPS tracker and, like, a gyroscope in there, basically see if you're stomping on the brakes every two seconds, give you a higher insurance premium. A lot of promise there, but not a lot of, you know, massive adoption over time.
Speaker 8:I think they didn't execute that well. And with Metromile, there's actually Root has done a better job of it, but Metromile was primarily just mileage based, the number of miles you drive, whereas Root uses your phone. You put your phone in the car, and it, like, checks if you're braking hard and where you're driving and stuff like that. So I think that there are opportunities for it to be used in insurance. In underwriting for loans in particular, it can be tricky because of the regulatory framework in The United States, the Equal Credit Opportunity Act, Fair Credit Reporting Act, etcetera. You're not allowed to discriminate on the basis of race and gender, and some of these other things are tied to that.
Speaker 8:So that can become tricky. And Mhmm. You actually have to give people the reason why they were denied. Mhmm. Like, it's an adverse action notice.
Speaker 8:So you can't have a black box model that's like, here's all the data.
Speaker 1:They just give you a massive matrix of weights, and they're like, this is why you're denied. Figure it out. Yeah. Totally.
Speaker 8:So that
Speaker 1:Tensor number 76 was activating for you, so get out of here.
Speaker 8:Yeah. So, actually, you know, if we didn't have that, lending would be probably more efficient.
Speaker 1:Mhmm.
Speaker 8:And, like, you'd better be able to target the right customer, but we do have those things for a reason. And so we can't have a black box model. And so so there are actually companies we invested in a company that's, like, in part detecting bias in Mhmm. In AI underwriting for this purpose to make sure that you're compliant.
Speaker 1:Yep.
Speaker 8:And yeah. So, anyway, I I I think it can be used and can be super useful. Mhmm. But because of those regulatory bound, like, guidelines, I'm not sure it's gonna, like, be a step change in underwriting.
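A minimal sketch of the constraint Sheel is describing: a lender has to be able to explain a denial with adverse action reasons, which pushes you toward interpretable scoring rather than a black box model. The features, weights, baseline, and cutoff below are illustrative assumptions, not any real credit model.

```python
# Minimal sketch: scorecard-style underwriting where every decision can be
# explained with adverse-action reason codes, as ECOA/FCRA-style rules require.
# All weights, features, and thresholds here are illustrative, not a real model.

FEATURES = {
    "credit_utilization": {"weight": -2.0, "reason": "High revolving credit utilization"},
    "late_payments_12mo": {"weight": -1.5, "reason": "Recent delinquent payments"},
    "income_to_debt":     {"weight":  1.2, "reason": "Insufficient income relative to debt"},
    "history_years":      {"weight":  0.4, "reason": "Limited length of credit history"},
}
APPROVAL_CUTOFF = 0.0
BASELINE = {"credit_utilization": 0.3, "late_payments_12mo": 0,
            "income_to_debt": 2.5, "history_years": 7}

def score(applicant: dict) -> float:
    return sum(FEATURES[f]["weight"] * applicant[f] for f in FEATURES)

def decide(applicant: dict):
    s = score(applicant)
    if s >= APPROVAL_CUTOFF:
        return "approved", []
    # Contribution of each feature vs. the baseline applicant -> top denial reasons.
    contributions = {
        f: FEATURES[f]["weight"] * (applicant[f] - BASELINE[f]) for f in FEATURES
    }
    worst = sorted(contributions, key=contributions.get)[:2]  # most negative first
    return "denied", [FEATURES[f]["reason"] for f in worst]

if __name__ == "__main__":
    applicant = {"credit_utilization": 0.9, "late_payments_12mo": 3,
                 "income_to_debt": 1.1, "history_years": 2}
    print(decide(applicant))
```

The only point of the sketch is that each denial maps back to named factors, which is what an adverse action notice has to say and what a pure black box cannot.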
Speaker 1:Yeah. Makes a lot of sense. Jordy, last question. You wanna let Sheel get out
Speaker 2:of here? No. This is great. Always a pleasure. Always fun.
Speaker 2:Super fun, guys. Looking forward to the next one.
Speaker 8:Yeah. This is great. Likewise.
Speaker 2:Have a great rest of your day. Godspeed.
Speaker 1:We'll talk to you soon. Well, we got some breaking news, another massive funding announcement coming in to the studio. Victor from Krea. Is that how you pronounce it?
Speaker 3:Krea. Krea is
Speaker 1:coming in. AI video company that just announced a massive fundraise. Let me look up if I can find the details of this fundraise so I can get everyone up to speed before we bring Victor in here. The website is krea.ai, and they just announced a huge funding round. $83,000,000, just a couple million over Jay.
Speaker 1:Not that it's a competition. They got Andreessen Horowitz and Bain Capital in the round. The past fourteen months at Krea have been hectic. We rolled out over 50 major product updates, grew to over 20,000,000 users organically, and they 20x'd their revenue, all with a team of eight working out of a living room in San Francisco. That is fantastic.
Speaker 1:You love to see growth like that. It doesn't happen every day, but it's happening more and more in AI. So excited to bring Victor into the studio and talk about that. They write, the numbers are exciting, but they can miss something crucial, the team behind it all. Krea is the work of a small, talented group of imaginative, incredibly dedicated people.
Speaker 1:And, yes, most of us still live together. That's fun. Until now, we've never shared metrics or announced our funding. Heck, we didn't even have a blog until a few hours ago. Those details always felt secondary compared to what truly matters to us, making AI intuitive and controllable for creatives.
Speaker 1:Now after the release of our redesign, the growth of our team, and recent funding, it feels like the perfect time to open up about what inspires us and what we're building towards. So they write, we're living through a moment where everyone talks about automation, APIs, and how AI and software are eating the world perhaps too much. Don't get us wrong. While AI is powerful, transformative, and is going to radically change creative work, creatives aren't going anywhere. Forty thousand years ago, we painted red ochre onto cave walls.
Speaker 1:Later, we drew with graphite on paper. Today, it's complicated. We use cameras to digitize light through glass lenses and silicon sensors, transferring data through metal wires to illuminate the LEDs lighting up the screen you're reading on now. How do you know I didn't print this out? I could've printed this.
Speaker 1:The printer still works at TBPN. We might have shifted to laptops, but you never know.
Speaker 2:Too much pushback.
Speaker 1:Don't assume anything. No. I am reading this on the screen.
Speaker 2:Paper usage.
Speaker 3:And
Speaker 2:Is he in the waiting room
Speaker 1:yet, by the way? Not yet. Not yet. Old tools and workflows will disappear, but our creative itch won't, and I agree with that. Excited to dig into that with him.
Speaker 1:We will build new and more powerful tools to keep doing what we've always done, master new mediums for self expression and storytelling. AI will render some tools obsolete, but not the people behind them. We see AI as a new medium that lets us express ourselves through any format, text, video, sound, and even three d. Such a medium needs better, smarter, and more controllable tools. That's where Krea comes into play.
Speaker 1:They say AI will not replace creativity. Creativity is not disappearing, but the walls between creative mediums are. Traditionally, excelling in one creative medium rarely translated smoothly into another. AI changes that, and we're bringing Victor into the studio to tell us more about Krea and the fundraise. So welcome to the stream, Victor.
Speaker 1:You here? Yo. How you doing?
Speaker 2:What's going on?
Speaker 3:Yeah. I'm here.
Speaker 1:Great. Can you give us some brief
Speaker 3:Great to meet you guys.
Speaker 2:Fantastic. Is the is the office gonna change with this new fundraise? I gotta ask. Are you guys gonna stay posted in the living room?
Speaker 1:Do we have you, Victor? I think we might have lost you.
Speaker 3:Seems like we're seems like we're having some technical issues.
Speaker 1:Okay. Well, we can see and hear you now.
Speaker 3:Yeah. Okay. Let me go down
Speaker 1:Far away from the Wi Fi router. I mean, that is the issue with working at home. Complicated Wi Fi. You need the enterprise solution soon now that you have the I know. Series b done.
Speaker 3:So very soon.
Speaker 1:Yeah.
Speaker 3:Oh, shit. They are doing another meeting, so I'm gonna steal Diego's room.
Speaker 1:This is great.
Speaker 2:I love getting whole tour.
Speaker 3:You guys wanna see the office?
Speaker 1:Yeah. Yeah. Yeah.
Speaker 2:Let's do a tour.
Speaker 1:Anything that you can show us. You can turn around. Oh, okay. There we go.
Speaker 2:No API keys hopefully.
Speaker 3:Music and music going on. That's fantastic. I'm real life.
Speaker 1:Yeah. Wow. Hey, How you doing?
Speaker 2:Congratulations on the milestone.
Speaker 1:Looking great.
Speaker 2:Wow. You guys said you guys were not kidding about the living room, you've really built it out. I love it.
Speaker 3:That's amazing. It's looking it's looking good.
Speaker 1:That's good.
Speaker 2:You're good. Nice. Why don't you introduce yourself? John John was gonna ask you that, then I cut him off.
Speaker 1:No. You're all good.
Speaker 3:Yeah. Sorry sorry about that.
Speaker 2:No worries.
Speaker 1:Boom. Here we go.
Speaker 3:So my background yeah. Like, I guess the TLDR on my background is, growing up, I was very interested in creative things of all kinds. I mainly had a music band, and I was doing everything from playing multiple instruments in the little studio that I created in my house to producing music, mixing, mastering, like, learning about all of these processes around music production. But through my music band, I also got super interested in doing photography and, like, doing different kinds of content for that music band. So that way, I explored, like, many different things from graphic design, three d, graffiti art. I also have, like, a big passion for that.
Speaker 3:I I also have, like, a big passion for that. And and at some point, I was that was in 2015. I was in I was just, like, fini I just finished high school, and I didn't I was not sure about what to do after that. And I had, like, two options in in front of me. One one of it was go and do classical guitar studies at the conservatory of Barcelona at the conservatory of Guitar of Barcelona.
Speaker 3:And the other one was doing something around computer science or physics. I really like math. And I guess what I like about math is kind of, like, the challenges that it poses. Like, I love challenges, and math puts, like, a lot of challenges in front of me. But in the end, I found, like, this middle ground in this degree that was called audiovisual systems engineering.
Speaker 3:Mhmm. It was kind of like this degree where they showed you how a microphone works, how MP3 encodes audio, how Mhmm. MP4 encodes video, etcetera. And that's where I met Diego, my cofounder. That was, like, ten years ago.
Speaker 3:He ended up in that same degree following kind of a similar story. In his case, he came from having a lot of interest in film and a lot of interest in in three d as well, but also he also loved programming, and he also loved engineering. So we both ended up, like, in in that degree. And on the second or third year, I got introduced into I mean, first of all, I loved coding. Mhmm.
Speaker 3:Like, right after getting into the degree, I loved coding. I found it, like, extremely creative. Later on, I found out about AI. I was mind blown by deep learning. Like, just the fact that you can have these neural networks learning by themselves from data and being able to do such complex tasks was very interesting to me.
Speaker 3:And when I discovered GANs, which were, like, very early models for image generation, that's when I fell super deep into the rabbit hole. And I ended up, like, reading a lot of papers, doing a ton of implementations on my own from all these papers that were out there back when everything was open source. And ended up
Speaker 2:The good old days. The good old days. Open source days.
Speaker 1:Do you have a first question? Or
Speaker 2:No. Go for
Speaker 1:it. I I I 20,000,000 users, absolutely massive. Congratulations. Where are you seeing those folks come from? Is it consumers just having fun, prosumers who are maybe doing little contracting work, monetizing their creativity on social networks, or are you already in the enterprise or all three?
Speaker 3:All three. I think that, up until recently, there were, like, two very well defined groups of users. One of them was the consumer type. It was people for whom this technology gave them a zero to one when it comes to creative freedom or to, like, enabling them to create. It's people that didn't necessarily come from a creative background, but they got a lot of joy out of expressing their creative ideas using this technology.
Speaker 3:And they were paying for the subscription almost in the same way that you could pay for a video game or that you could pay for a camera. Mhmm. Then we had the professional, and the professional was that user that did have a creative background and was using our technology, I mean, our platform, to speed up some of their processes. Mhmm. These speed ups, like, vary depending on the industry.
Speaker 3:Like, you would see architecture studios coming to Krea with very low resolution renders and using our enhancer to get these renders up to four k resolution with very crisp textures, or you would see game designers coming to our real time tool, putting in a bunch of ideas around characters and being able to create prototypes for some characters that they were designing.
Speaker 2:So Can you talk about just general adoption? So during the during the sort of like Studio Ghibli moment, it's still, you know, top of mind. We saw a lot of people that still weren't aware. They had no idea how these images were being created. John and I think that some people thought it was like Snapchat filters or something like that.
Speaker 2:Can you talk about just like consumer sort of awareness and adoption broadly? You know, are are are you guys still finding people every single day that are just sort of, like, completely new to this sort of new image generation models? Or or Yeah. You know, how what what do you think the broad consumer awareness is today?
Speaker 3:I mean, I just came back two days ago from a short trip to New York, and I feel like that trip to New York made me realize how deep in the bubble we are here in SF. Like, I think that I take for granted that people know that nowadays you can generate images with artificial intelligence, and that's not the case. Like, I wouldn't even know what's, like, the percentage of reach that we have had right now, but it's definitely very, very small. Like, this technology is still nascent. People like us are trying to make it intuitive and usable for really anybody to just, like, grab a phone, type our URL, and be able to create an image, very easy.
Speaker 3:But I think that people still need to know that this is even a possibility. Like, I think that they just don't even think that some of the problems that they have when it comes to marketing or when it comes to doing product design can be solved today by artificial intelligence. So I don't know if I'm the best person to have, like, a good sense of what the current adoption is because of how deep we are in the SF bubble.
Speaker 1:Yeah.
Speaker 3:From the experience that I've had in New York I don't know. Like, I have this fun story. I was in an Uber, and the driver, like, she just, like, saw that I was, like, talking on the phone in Spanish, and she was from Puerto Rico. So she started, like, talking with me and asking me where I was from and what I was doing. And this woman, she was, like, selling, sort of, like, beauty products on Instagram.
Speaker 1:Mhmm.
Speaker 5:And and
Speaker 3:I saw it, and she started asking me, oh, so can I use your tool for doing, like, this product photography, or can I use it for, like, all of these things? And as she was talking, I was like, yes. You can do it, but you need to go through a process. It's not some magic thing where you, like, go there and, like, the AI does everything for you. You need to go and train a model with your product.
Speaker 3:After the model is trained, you go to the image generator. There, you create, like, all the assets that you want. And after you have this workflow in mind, after you have, like, this workflow in place, you can generate as many assets as you want and, like, your workflow is gonna be extremely optimized. Yeah.
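To make the workflow Victor walks the driver through concrete, here is a hypothetical sketch: fine-tune a model on your own product photos once, then reuse it for as many prompted generations as you want. The `ImageGenClient` class and its methods are stand-ins for illustration, not Krea's actual API.

```python
# Hypothetical sketch of the workflow described above: train on your own
# product photos once, then generate assets from the trained model on demand.
# `ImageGenClient` and its methods are placeholders, not a real service API.
from pathlib import Path

class ImageGenClient:
    """Stand-in for whatever image-generation service you actually use."""
    def train_custom_model(self, name: str, images: list[Path]) -> str:
        print(f"training '{name}' on {len(images)} product photos...")
        return f"model/{name}"          # pretend we get back a model id

    def generate(self, model_id: str, prompt: str, n: int = 4) -> list[str]:
        return [f"{model_id}/asset_{i}.png ({prompt})" for i in range(n)]

client = ImageGenClient()

# Step 1: one-time setup -- fine-tune on the seller's own product shots.
photos = sorted(Path("product_photos").glob("*.jpg"))
model_id = client.train_custom_model("beauty-serum-v1", photos)

# Step 2: reusable workflow -- prompt the trained model for as many assets as needed.
for prompt in ["studio shot on marble, soft light", "lifestyle shot, beach at sunset"]:
    for asset in client.generate(model_id, prompt, n=2):
        print(asset)
```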
Speaker 2:How do you think about prompt engineering long term? You know, I remember, like, a year and a half ago, maybe a year ago, everybody said, like, prompt engineer is gonna be this new role at every company. And now it feels like it's getting easy enough to prompt a lot of these tools, I'm sure like Krea, that maybe it's a skill set, but not necessarily a job. But I'm curious, do you think that prompt engineering will still matter in five or ten years, or will it just be super intuitive?
Speaker 3:I mean, prompt engineering, at the end of the day, it's just, like, being able to communicate your ideas in a clear way with, like, this technology. You know? Like, we have, like, this AI model that can understand language and that can do things. And prompt engineering is just, like, the way that you tell this knowledge that we have encapsulated how to do things or what exactly to do. So at the end of the day, it's just managing, and I do think that this feels like a new way of doing software.
Speaker 3:And I do feel like, in the future, most software that we see out there will have been created in very big percentage through prompt engineering, through steering AI models towards whatever you want to accomplish. And I see this in the visual space. Like, I see us building Krea in the future more and more through instructions. I see our users working with our platform more and more through instructions rather than through just, like
Speaker 1:Yeah.
Speaker 3:Like, typing a prompt and just, like, getting an image. I think that this new model from OpenAI kinda shows that.
Speaker 1:Yes. Speaking of the new OpenAI model, it seems like they've evolved the actual underlying algorithm. It's not purely diffusion based. Are there new buzzwords or keywords that, have you reverse engineered any of how they're doing that? Because it seems like there's a number of steps.
Speaker 1:Like, they're actually transforming the prompt. There's some reasoning in there. The image loads top to bottom, which we haven't seen before. Midjourney kinda diffuses everything from blurry to crisp, just the whole image at a time. It seems like if they're doing some sort of blocks or line by line rendering.
Speaker 1:What can you tell us about how that system actually works?
Speaker 3:I don't have a I mean, I have some intuitions, but I I feel like super I don't have high confidence
Speaker 1:Mhmm.
Speaker 3:On how it works. It seems like there's some auto regressiveness going on. Yeah. And we have already seen similar things with Grok image generation. Mhmm.
Speaker 3:But I feel like, to me, what's really game changing about this new image model is it's, like, very similar to what we were, like, talking about before. Like, this is an image model that is able to reason, and it's able to understand instructions
Speaker 1:Mhmm.
Speaker 3:Like, it's able to understand, here's, like, the picture of my dog, turn it into Studio Ghibli. Yeah. And, like, this is, like, something new. This is, like, something that previous diffusion models were not good at. Like, diffusion models are good at, you have, like, a text, and you can generate an image that kind of represents that text.
Speaker 3:But it's very hard to have them reason and think about, like, what you want to do, what the instruction is that the user wants, and how to accomplish that goal.
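A toy numeric contrast of the two generation styles being discussed: a diffusion-style loop refines the whole output at once starting from noise, while an autoregressive loop commits to it piece by piece, which is consistent with the top-to-bottom rendering John mentions. This only illustrates the two concepts; it is not how OpenAI's model, or any production model, actually works.

```python
# Toy contrast between diffusion-style and autoregressive-style generation.
# Purely illustrative -- neither loop is a real image model.
import random

TARGET = [0.1, 0.5, 0.9, 0.3]            # pretend this is the "image" we want

def diffusion_style(steps: int = 5):
    """Start from noise and refine every pixel a little at each step."""
    x = [random.random() for _ in TARGET]
    for step in range(steps):
        # move the whole image a bit closer to the target each iteration
        x = [xi + 0.5 * (ti - xi) for xi, ti in zip(x, TARGET)]
        print(f"diffusion step {step}: {[round(v, 2) for v in x]}")
    return x

def autoregressive_style():
    """Commit to one piece at a time, left to right / top to bottom."""
    x = []
    for i, ti in enumerate(TARGET):
        x.append(ti)                      # each position is finalized before the next
        print(f"autoregressive step {i}: {x}")
    return x

diffusion_style()
autoregressive_style()
```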
Speaker 1:Yeah. Yeah. It seemed like it was, like, everything style transfer should have been, plus the latest and greatest in diffusion models. Like, they really, like, packaged that up very well, and so I think that's why it broke through. But, anyway, congratulations on the round.
Speaker 1:Thanks for, thanks so much for stopping by. Yeah. And thanks for the office tour unexpectedly. That was that was really fun. But we'll let you get back to work.
Speaker 2:I'm sure so
Speaker 1:much to do.
Speaker 2:To the whole team.
Speaker 1:And and we'll talk to you soon.
Speaker 3:Sounds great.
Speaker 2:Thank you
Speaker 3:so much for having me.
Speaker 1:Thanks lot. Talk to you soon. See you. Bye. Very interesting.
Speaker 2:Nice. We got Leif coming on from Public. I'm curious if he's been sleeping at all
Speaker 1:in the last few days with everything going on in the market. He has some interesting data on what's happening on Public because that's where people go to trade multi asset investing, industry leading yields. They're trusted by millions, folks. You've heard us do the ad reads before, but now we have Leif in the studio breaking it down for us, and we will bring him in right now. How are you doing, Leif?
Speaker 1:Welcome to the stream.
Speaker 2:Boom. What's going on? Great to finally have you.
Speaker 10:Nice, Huxan. What's going on? What's going on?
Speaker 2:I was just
Speaker 10:saying I had my caffeine already, so that's why I'm like
Speaker 2:Good.
Speaker 1:Fantastic. Good.
Speaker 2:Good. Somebody was commenting yesterday about our caffeine consumption.
Speaker 1:Oh, yeah.
Speaker 2:It's easily easily five hundred milligrams plus at this point. John, that's five hundred milligrams during the show Yep. Oftentimes, but he's, like, built like a horse.
Speaker 1:So he
Speaker 2:can he can take it. How how are you doing? We were just joking about whether or not you had slept at all the last week. I know it's been busy. You know, I'm sure it's been a busy time for you and the and the whole team.
Speaker 10:It's been busy. But our systems have been up at least compared to other folks.
Speaker 1:That's great.
Speaker 2:Yeah, that's good. Walk us through some of, I mean, I'm mostly curious to hear you had shared on X yesterday about how there have been more buyers than sellers over at least in certain moments over the last week. Maybe break down that data point and and Yeah. And then we can talk about some other stuff that's top of mind.
Speaker 10:Yep. I mean, generally speaking, just so, like, a mini step back is, like, this generation of investors, like, call it especially millennials, like, man, they have been through market cycles like crazy in the past five years. Right?
Speaker 1:Oh, yeah.
Speaker 10:Or even just like through their lifetime. I I even saw some of saw like some meme on on X the other day of like millennials experiencing their fourth once in a lifetime opportunity of a drop in the market, you know?
Speaker 1:Yep. Yep.
Speaker 10:And so on. And I think especially, like, the March 2020 drop where, like, circus breakers hit and so on That's crazy. Is still in people's minds. Mhmm. And I think that specifically, because you saw a lot of, individual investors actually also making money on that, and I think that has stuck with a lot of people.
Speaker 10:So generally speaking, this behavior of buying the dip is a little bit retail investing culture now. And so whenever you see these, like, massive drops, this is really when we see some of our best days. Yeah. Yesterday was, like, you know, one of our record days in just deposits, for example.
Speaker 10:And yeah. But yeah.
Speaker 1:Yeah. Well, my reaction, you know, the stock market was down 5% back to back days. And I was just like, what what is everyone complaining about? We're not even hitting circuit breakers. Like, this is not that crazy to me because I remember 2020, and it was way crazier.
Speaker 1:But, of course, like
Speaker 2:Can you actually break down the mechanics of, like, what exactly is happening and what truly triggers the circuit breakers?
Speaker 1:Yeah.
Speaker 10:It's just I don't have the exact numbers in front of me, but it's essentially just if it drops too quickly to, you know, specific thresholds, I believe it's 7% and then
Speaker 5:Yeah.
Speaker 10:10 or 15 or so, essentially, they pause the markets. And that didn't used to exist in the past. Right? Yep. And so it's essentially, like, a little bit of, like, a safety trigger, like, a speed bump to, you know, make sure that investors can take a breather Yeah.
Speaker 10:When these markets start to drop too quickly.
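For reference, the market-wide circuit breakers Leif is describing are set at declines of 7%, 13%, and 20% from the prior day's S&P 500 close; Level 1 and 2 breaches generally pause trading for about fifteen minutes, and a Level 3 breach halts it for the rest of the day. A small sketch of just the threshold rule:

```python
# Market-wide circuit breaker levels: 7% / 13% / 20% declines from the prior
# day's S&P 500 close. Levels 1-2 generally pause trading ~15 minutes;
# Level 3 stops trading for the rest of the day.
def circuit_breaker_level(prior_close: float, current: float) -> int | None:
    decline = (prior_close - current) / prior_close
    if decline >= 0.20:
        return 3
    if decline >= 0.13:
        return 2
    if decline >= 0.07:
        return 1
    return None   # no halt triggered

if __name__ == "__main__":
    prior_close = 5000.0
    for px in (4700.0, 4640.0, 4300.0, 3950.0):
        print(px, "->", circuit_breaker_level(prior_close, px))
```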
Speaker 1:Yeah. In Japan, don't they have lunch break
Speaker 2:in the middle of the
Speaker 10:They do. Not just in Japan. Other countries too. Yeah.
Speaker 1:Other countries too. I love that. But, yeah, it makes sense. I mean, we I a couple years ago when the algorithmic trading got really popular, there was, like, the flash crash. Mhmm.
Speaker 1:I think the market traded down, like, 20% in, like, two seconds and then went back up. And, yeah, obviously, you wanna avoid that. What about overnight trading? You always hear, oh, you're watching the futures market, and it seems like somebody has an edge here that they can trade before the market opens. How wait.
Speaker 1:Is 24/7 trading coming to America? We've heard some rumors. Is there a way to get in on that action?
Speaker 10:I think it's definitely coming. Like, it will also come to Public at some point. Yeah. Right now, we essentially have, you know, 4AM to 8PM. But the thing that you have to think of is that each trading window
Speaker 1:Mhmm.
Speaker 10:Has its own participants and its own liquidity and execution venues. Mhmm. And so think of it as, like, there's the regular opening between your 09:30 and 4PM. That's the most liquidity. It's when, like, most people participate.
Speaker 10:Mhmm. You could call that the healthiest time in the market in theory. Then you have essentially premarket and postmarket, which is, you know, 4AM to 09:30 and, you know, 04:30 to 8PM. And then you have overnight, which is, like, the new thing. Overnight right now, there's essentially only, like, one major player who drives the liquidity for that.
Speaker 8:Mhmm.
Speaker 10:And what happens there is that because you have only one player, you don't have that many platforms participating yet. And so you can have these moments where there's a lot of kinda unilateral flow happening. And that's why in the overnight markets, you often see certain stocks just suddenly rally. And that is a little bit I don't wanna call it fake, but it has these wild swings because of the types of people that trade in those times of the markets, and it's a concentrated liquidity pool. And so, you know, these swings just happen, you know, way more dramatically. And so you often have these moments where, like, overnight, a stock goes up and you see on Twitter, everyone's like I'm sorry.
Speaker 10:On X, everyone, like, posting the screenshots of, oh my god. You know, Palantir is going nuts right now in the overnight market. And then suddenly, like, 09:30 market opens and it goes down and everything kinda normalizes again. Right? And that is really just because each market window has their own participants and their own pools of liquidity.
Speaker 10:And so you kinda have to take it with a bit of a grain of salt. Like Mhmm. Can you play that? Maybe, but there's obviously some risk there as well.
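A small sketch of the session windows Leif lays out, classifying a quote timestamp into premarket, regular hours, after hours, or overnight. The cutoffs follow his description and are in US Eastern time; exact session hours vary by broker and venue.

```python
# Classify a quote timestamp into the trading windows described above.
# Hours are US Eastern and follow the speaker's description; exact session
# hours vary by broker and execution venue.
from datetime import time

def trading_session(t: time) -> str:
    if time(4, 0) <= t < time(9, 30):
        return "premarket"
    if time(9, 30) <= t < time(16, 0):
        return "regular"          # deepest liquidity, most participants
    if time(16, 0) <= t < time(20, 0):
        return "after_hours"
    return "overnight"            # thinner, more concentrated liquidity

for t in (time(3, 0), time(8, 15), time(11, 0), time(17, 45), time(23, 30)):
    print(t, "->", trading_session(t))
```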
Speaker 1:Can you talk a little bit about information diets? And, really, I wanna know what events are predictable, not like Trump imposing massive tariffs all of a sudden on Liberation Day, but what can we count on like clockwork every single year to be the highest volume day of the year? Is there a Super Bowl of stock trading that happens, whether it's earnings day or jobs day? What are the big reliable sources of high liquidity in the market?
Speaker 10:I don't know if I have a good answer there. Mhmm. My gut reaction would be, just from internal measures, like Monday mornings, because you have a lot of queued orders from the weekend and stuff like that.
Speaker 1:Yeah. It makes sense.
Speaker 10:You could execute it at the open because not everyone will trade pre market and stuff because spreads are wider and all these things. But specific days, I'm not really sure, to be honest.
Speaker 1:Maybe, like, big tech earnings too is kind of like a season for that.
Speaker 10:But generally speaking, it's often just driven by market events. Right? At the end of the day, people will trade when they see opportunity.
Speaker 1:Yep.
Speaker 10:And if you can predict that, let's start a hedge fund together tomorrow. But other than that, you know, it is driven by these moments. Right? And we have good days when the markets are in the news no matter which direction. As long as the markets are in the news, we have good days because people get inspired one way or the other.
Speaker 1:Yep.
Speaker 10:And, you know, so in a way, trading volumes for companies like ours are a little bit competing with any other thing that competes for attention because, you know, if markets are in the news, you get inspired by something, and that might drive action, and that's what we see.
Speaker 2:Yeah. What do you see from a demographic standpoint? I'm curious. Public obviously offers access to bonds, which, you know, it was probably good to be in bonds, you know, if you sold last week before Liberation Day. But do you see a lot of demographic differences, you know, sort of like Gen Z is basically, like, no bonds for me, just, like, GameStop.
Speaker 2:Right? Like, always know it's gonna be worth something.
Speaker 1:Yeah. It's a store of value.
Speaker 2:It's a store of value. Right? But I'm curious if you see sort of pretty specific activity across different demographics in terms of interest in these different types of assets?
Speaker 10:Yeah. I mean, straight up bonds always skew older. Just from a perspective, the older you get, the more you're, like, thinking of preservation versus growth. Mhmm. But then I think what's interesting now is that so we've launched multiple yield accounts essentially.
Speaker 10:So you have your high yield cash, which is, you know, similar to a savings account. You just get your yield, and it's variable based on interest rates and such. Then you have, like, your bond account, which is essentially, like, a basket of corporate bonds underlying, and then you have, like, your treasury account, which is, like, US government treasuries. And those kinda simplify the investments into bonds because you just deposit money, earn yield. It's, like, very simplified and just, like you'd have to, like, pick certain bonds and stuff like that.
Speaker 10:So what we've seen with that is that a lot of people are using those to just put money into the markets waiting for these moments of, like, opportunity. Right? So what we've seen in the last few trading days, essentially, is that people cycle out of the yield accounts and into stocks and ETFs because they essentially had this cash lying around and were like, okay. You know? What we've been hearing a lot is that, like, hey.
Speaker 10:After Trump was elected and the market started ripping and there were, you know, all time highs and a bunch of things going on all the time, there was also a bunch of, you know, individual investors who were essentially feeling like, oh, I might just be buying in at the top right now. And so they put it into these yield accounts. But then the minute you saw things drop the way they've done now, they've basically cycled it out of the yield accounts into stocks and ETFs specifically. So you're seeing younger generations using it less as a, hey, I'm gonna now actually hold the bond to maturity ten years from now, and more using, like, these account types that we've created to, you know, just, like, earn some yield on your stuff, you know, until you actually see other opportunities.
Speaker 2:Yeah. Do you think AI is already helping retail investors better understand the companies that they're investing in? Every public company is putting out a huge amount of information. Unless you're becoming just overly obsessed with this specific stock. It's hard to figure out what's what's important, what should I be looking at.
Speaker 2:What have you guys seen? I know I know you've like, you know, released products to help people leverage all of that data with AI, but I'm curious what you're seeing.
Speaker 10:That's a great layout, Johnny, by the way.
Speaker 1:Thank you. A real song.
Speaker 10:Good dream is my pitch, you know. Thanks for sponsor.
Speaker 2:Yeah. But,
Speaker 10:yeah. %. And, like, we obviously launched Alpha, which started off by just you can swipe down on any stock, ask any question about the stock. Mhmm. And, you know, and that just created this, like, bite sized researching for for things.
Speaker 10:And we've kinda fed the model with a bunch of data that we already had from years ago. Like, for example, we acquired a company, like, three, four years ago that was essentially a tool that turned all the SEC filings that had, you know, custom company KPIs of, like, subscriber numbers and, you know, how many cars has Tesla shipped and things like that into more structured data. And then we used that structured data to kinda train our models and such to make that very easily accessible. Now what has happened is that it's much more proactive than, you know, just you kinda have to pull information. And so the obvious one is what we call why is it moving, which essentially, if a stock is going in either direction very heavily, we kinda pop this card on that page and tell people, you know, like, why this thing is likely moving right now.
Speaker 10:And then you can tap on that. That brings you into a conversation with Alpha, gives you a more, you know, granular breakdown on that. And so it's much more the pushing versus the pulling. And I think that's also just generally where it's going. But, like, what's awesome to see is these, like, bite sized contextual moments where AI can be super fast and just really great at summarization
Speaker 6:Yeah.
Speaker 10:And can also go against biases. Right? So, like, if we go for news stories, for example, we sort of, like, QA multiple sources. So we're not just taking one source and popping it to you, but we QA multiple sources, and then the summarization comes from the multiple sources.
Speaker 3:Mhmm.
Speaker 10:And so there is a little bit of, like, QA built in and a little bit of, like, taking the bias out that maybe one writer will have or something. Right?
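A hypothetical sketch of that multi-source QA idea: only surface a "why is it moving" explanation when a big move is corroborated by more than one source, then summarize across all of them. The fetching and summarization below are stubs for illustration, not Public's actual pipeline.

```python
# Hypothetical sketch of "why is it moving": only surface an explanation when
# several independent sources agree, then summarize across all of them.
# The fetching and summarization here are stubs, not Public's real pipeline.

def fetch_headlines(ticker: str) -> dict[str, list[str]]:
    # Placeholder: a real system would pull from multiple news/data providers.
    return {
        "source_a": ["ACME cuts full-year guidance on tariff costs"],
        "source_b": ["ACME lowers 2025 outlook, cites import tariffs"],
        "source_c": [],
    }

def why_is_it_moving(ticker: str, pct_move: float, min_sources: int = 2) -> str | None:
    if abs(pct_move) < 0.05:                  # only pop the card on big moves
        return None
    by_source = fetch_headlines(ticker)
    corroborating = [s for s, items in by_source.items() if items]
    if len(corroborating) < min_sources:      # don't trust a single source
        return None
    headlines = [h for s in corroborating for h in by_source[s]]
    # Placeholder summarization: a real system would hand `headlines` to an LLM.
    return f"{ticker} moved {pct_move:+.0%}; {len(corroborating)} sources point to: {headlines[0]}"

print(why_is_it_moving("ACME", -0.08))
```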
Speaker 2:Yeah. Speaking of
Speaker 10:Which can be helpful.
Speaker 2:Speaking of data quality, what do you think you know, yesterday, Walter Bloomberg shared some news that wasn't quite accurate and moved the market, you know, trillions of dollars. Maybe helped us avoid whatever it was, Black Monday
Speaker 1:Black Monday.
Speaker 2:Kramer was calling it. Do you think that there's any, like, solution to that, or is it just the nature of the Internet where now you have these accounts that basically act like mainstream media except they just publish? They're not doing any journalism. They're not even looking at data. They're just sort of, like, trying to be the first or second or third big account that's, like, sharing a headline.
Speaker 2:What's the fix there? Right? Is there one, or is that just the nature of the Internet where this sort of information breaks and then markets are gonna react really quickly? And now retail investors are so ready to act on information. You know, a good example is, like, if you just happened to be on X when Trump posted Trump coin and bought $20 of it, you became, you know, a millionaire within a few hours.
Speaker 2:But I'm curious like if you've thought at all about like how yeah. Just just I I don't know if there's a I don't know if there is a solution. Right?
Speaker 10:Yeah. But I always come back to who are you building for, and therefore, what behavior is your product inspiring? Generally speaking, the way you design your product will always have an impact on how people use it and their behavior. And in our case, you know, yeah, you can buy Bitcoin. You can do options trades.
Speaker 10:But generally speaking, the way we've designed the product and the offerings that we have are more focused around building long-term portfolios for people that wanna, you know, compound their wealth over time, you know, all the fixed-yield, fixed income offerings that we now have, etcetera. And I think there are just certain design decisions that, in the end, impact the behavior of these users. Right? And Yeah. Therefore, in the end, I think that is much more important for people to make healthy investment decisions than necessarily, you know, how the consumer and so on.
Speaker 10:But, like, that behavior you're talking about is obviously not necessarily coming from the potentially wrong information from some X account. That behavior is more cultural, or how they were trained when they started being in the markets. You know? And so I think it is much more the sense of, like, if the platform you're using is closer to gambling, you will end up being more of a gambler automatically, just based on the design of how you were introduced to the markets. And, therefore, you'll be more prone to, you know, potentially react on these types of things, because your investing style will be closer to a gambler than maybe someone who tries to compound their wealth over time and, you know, cycles money out of a high-yield account into, you know, Amazon stock or whatever because they see an opportunity that Yeah.
Speaker 10:But to then also hold it for the long term
Speaker 2:and so on.
Speaker 10:So I think that's much more the issue, so to say, than those accounts.
Speaker 2:I have a bunch of other questions, but I think we're over time. We'll have to have you back on very soon. I know you got some big stuff in the works. Thanks for coming on.
Speaker 1:Yeah. I'm excited for the announcements. This will be great. We'll talk to you soon.
Speaker 2:Thank you, man. Bye.
Speaker 1:And we have our last guest of the show announcing a $30,000,000 series b, the smallest round of the day. It's rough out there.
Speaker 2:What is this? A round for ants?
Speaker 1:I don't wanna talk to a crowd. I mean, it's great. It still gets a size gong hit. But it is funny. We've seen a bunch of huge rounds today.
Speaker 1:It's a good day in the markets. The public markets are down, but the private markets are ripping. Let's bring in the founder and CEO of Arena AI. Today, he's announcing a $30,000,000 series b and introducing Atlas, an AI hardware engineer that is used by many of the world's most respected and ambitious hardware companies. I'm excited to talk to him.
Speaker 1:Now if you're there, welcome to the studio. How you doing? Boom.
Speaker 2:What's going on?
Speaker 6:Going. Nice to see you guys.
Speaker 1:It's great. Would you mind just starting with a little introduction on who you are, the company, and maybe a little bit of your background? Because I thought the previous company was really interesting too, so I wanna hear about that.
Speaker 6:Thanks. No. That's awesome. Yeah. So, I'll tell you, like, a bit about the company.
Speaker 6:It's Atlas, an AI hardware engineer. The company's called Arena. We're based in New York. Yeah. Background, I started out as an applied physicist, so I spent, like, a decade when I thought I was doing physics dealing with hardware problems.
Speaker 6:Again, this was, like, a while back. You know, switched gears, did a brief stint in consulting, so wore a suit for a short flash of time over there. And then Bring it back. Yeah. So it started right during the financial crisis too, which was a wild time to be starting a job.
Speaker 6:Wow.
Speaker 1:That's insane.
Speaker 6:But then, you know, missed tech, moved out to San Francisco. The first company, to your point, was in 2014. It was called Kimono. Mhmm. So the idea was to make it really easy, I mean, to write a web scraper.
Speaker 6:Right? And it was, like, pretty popular. We grew to 50,000 users. We got bought by Palantir Mhmm. Which is where I met my cofounder.
Speaker 6:We were there for a while and then started Arena in 2019.
Speaker 1:Very cool. Can you take me through the founding of Arena? How'd you settle on this to build? It seems very on trend now, but you've been working on it for a couple years. What what what inspired you?
Speaker 1:What was kind of, like, the early go to market, the first customer that you were talking to? What does customer development look like? All that stuff.
Speaker 6:Yeah. Totally. So, you know, we had a bit of an interesting path here, I would say. It's a little bit nontraditional. We decided to bootstrap the company.
Speaker 1:Oh, wow.
Speaker 6:And so we said, well, you know, if you think about our view on enterprise problems, if you think about the b-to-b problem space, there's almost like a Maslow's hierarchy, right? Which is that, let's say you've had a job for two to four years, you've encountered a certain group of problems, like payroll, onboarding, communications. But then if you've been there for a while, you're deep in that industry, you're seeing another set of problems. Right? And I think one of the things Palantir did so well is they were able to go so deep into a customer for so long that they encountered problems for which there was very little competition. And, like, previously, you had consulting companies kinda doing that.
Speaker 6:So there's, I think, a whole host of sort of untapped problems. And so our view is, if we wanna tackle problems that are really deep in an industry, that are really valuable, we need to go really deep with our customers. So that's been the philosophy since the founding days. Instead of saying we're gonna sell to other startups and sell bottom up, the view is to start with a very difficult-to-enter customer and start top down. And the origin of the company was actually, let's say, less vertically opinionated.
Speaker 6:We, you know, had a depth in reinforcement learning and transformers, and were like, let's go and apply that to enterprise problems. So our thesis hadn't formed as sharply as it has today. And then we saw traction in a few different markets. And, like, post ChatGPT, we're like, look. I don't think, for a small company, playing in horizontal AI is really a winner's game.
Speaker 6:But we found that there was this beautiful intersection that went back to sort of my days as a physicist, where two different technical fields come together: applied physics, electrical engineering, and AI. Now you have an interesting customer set where you look at, like, a hardware test lab. It actually hasn't changed in a long time. The incumbent competitor set are three companies from the eighties. You know?
Speaker 6:It's, weirdly, the underpinning layer of technology on which all of our software runs, and it weirdly hasn't changed that much. Developing the hardware has, like, stagnated. Yeah. And it's kind of surprising when you think about it.
Speaker 1:Yeah. Can you talk a little bit about, like, hardware engineering 101? Are we writing Verilog? Are we in CAD? Are there other systems?
Speaker 1:Like, what does the work look like? And is this something where it's, like, managed on GitHub, so Devin's gonna go off and write some code for you, and we're just doing fancy autocomplete? I'm not trying to diss. It's incredibly valuable if that's what it is. But just concretize, like, what are we actually talking about here for the folks who haven't done hardware engineering?
Speaker 6:Totally. Right? So let's break it down. Let's take a simple example. Let's take something like a drone.
Speaker 6:Right? Like, you've got the mechanical shell, where you've got your mechanical engineering CAD model, stress-strain modeling. Still a lot to do, but humans got pretty good at it. We've been building physical stuff for a while. Like, you know, we can screw things in, weld them together.
Speaker 6:Again, not trivializing that, ton of opportunity
Speaker 1:Yeah.
Speaker 6:But that's kinda figured out. Now inside, especially as you think about systems that are starting to go autonomous or partly autonomous. Right? You're like, I mean, we've had helicopters. Now we have drones.
Speaker 6:Right? So what's the change? The brains of this are basically a set of embedded systems. Right? So embedded systems are effectively your computer, the green motherboard you've got inside, except a whole bunch of them.
Speaker 6:Right? You've got, like, one that's operating as a sensor, and multiple different types of sensors. So, like, an IMU for how you're oriented, temperature sensors, optical sensors. So all of your sensors, just like the body has, and then a brain. And you might have multiple brains onboard; again, in our drone example, flight computers.
Speaker 6:And then you have actions that you take, like servos and actuators. And if you think about this, inside that mechanical shell, you've got this almost electrical skeleton, sort of like your own nervous system wired together. Right? And at that layer, you know, to your point, there are two things that are happening. One is all of the electrical connections to make that work.
Speaker 6:And then for certain of those chips, you're running code onboard. So to your point about the Devin so, you know, you might have an FPGA that's programmable. You're putting code onto it. And so we're currently at that inner layer. We're currently at that nervous system, because there's this huge need.
Speaker 6:And if you look at just the kinda labor markets for a second, this is actually weirdly not surprising, but it has kind of profound implications. The last fifty years, if you look at computer science course enrollments, they're up by 90%. None of us are surprised by it. Yeah. But electrical engineering course enrollments are down by the same amount.
Speaker 6:Yeah. And so if you look at it, we've got tariffs. We've got all that. We've got a huge resurgence in American manufacturing. Right?
Speaker 6:And now you have all these intelligent hardware companies, robot companies, space companies. And, like, you have this burning platform problem where it's like, oh my god. People haven't been studying this stuff. And, like, we're trying to now ship at the velocity of a software company, in a hardware state space where stuff can literally explode Yeah. Without a workforce.
Speaker 2:Right? Yeah.
Speaker 6:So so you
Speaker 2:Yeah. My favorite example here is you have Sonos, which has made, like, beautiful devices, but they haven't managed to get even the software right. I know. Like, the collective experience of using a Sonos product is just completely brutal. Right? And then you look, like, this is a company with, like, hundreds or thousands of employees.
Speaker 2:They're public. And it's not even defense. Like, it's not critical that my speaker work when I want it to play music. Right? It's annoying, but it's not the end of the world. And then it's like, hey, if that is hard in a controlled environment in a home Yeah.
Speaker 2:And then we need to do much harder things in these sort of defense Yeah. Critical industries. Like, that should be a red alert.
Speaker 6:Yeah. Yeah. It totally is a red alert. That's where we've got customers kinda screaming for it. And, you know, at the root of it, you have this idea that, you know, it's software.
Speaker 6:I mean, you talked about Devin. Right? Like Uh-huh. I mean, it's never been easier to write code. Right?
Speaker 6:I mean, already, weirdly, Python was an abstraction over, like, you know, C++. It's not as hard as C++. It's gotten easier and easier. And now, like, you're speaking in English, and that's amazing. Right? It's like the Star Trek computer.
Speaker 6:Yeah. And it's a beautiful environment because code doesn't need to obey physics. It just needs to render in your browser. Right? Now suddenly you're making a contract with nature, as we know.
Speaker 6:You guys go outside. You're like, nature's unforgiving, man. It's not like we've fixed a bunch of this stuff: we don't have space elevators. We don't have jet packs. Like, no.
Speaker 6:We have, like, TikTok, which is great. But, like, what about all of that? And the problem is we're encountering this physics. And so each test cycle, to your point about the Sonos, is like, great. I have an idea.
Speaker 6:I'm gonna prototype it. Like, let me run it in my terminal. It doesn't compile? Great. I probably made a stupid mistake.
Speaker 6:Everyone makes these. But the cost of making a mistake at that speed in hardware, like, worst case scenario, you get it wrong. Something explodes. But then even on the development cycle, each time you're like, oh, damn. The board was wrong.
Speaker 6:I need to go and respin it. That's, like, adding three months to the cycle. And so these timelines and cost structures I mean, you know, we all know how much the F-35 program cost and overran. It's like, that explains it. I mean, there's a lot more that explains it, but that's a piece of it, and an important piece.
Speaker 1:Yeah. Can you talk a little bit about where you see the most value to be delivered in the AI stack? It sounds like you're not doing pre-training on a foundation model. Is fine-tuning important? Is, you know, building a system on top of existing LLMs important? Are you doing reasoning, or is it more about UI and integration into existing systems?
Speaker 1:There's so many different ways to create value in the stack right now. I'm sure it can be kind of overwhelming, but how are you thinking about it?
Speaker 6:Totally. And, you know, it's a cool question because our own thinking on this has evolved quite a bit. I would say we started with a view that was much more, we kind of need to own all the pieces on the modeling side and solve the hard modeling problem. And we sort of realized what's happening is base cognitive functions are just becoming available as an API. So, like Yeah.
Speaker 6:Vision is just gonna be available. We shouldn't work on a vision problem. Like, go fine-tune, like, a YOLO or whatever VLM is your favorite. Yeah. You know, LLMs are maturing.
Speaker 6:But what we do find is, if you think about a person doing work, imagine our objective is we're trying to get AI as good as, like, a mid-level person, let's say, or a junior person even. Right? And we can unlock a fair amount of value with that. You talked about reasoning, and there's a sort of, like, notion of reasoning in LLM land.
Speaker 6:But if we just think about human reasoning, there are a couple of things that are special. Right? Especially in a formal environment like electrical engineering, there are certain rules of the world that we've learned over time that need to be true. It's like, gravity is 9.8 meters per second squared. You can't probabilistically learn that by watching stuff fall in air and being like, yeah, my ML model no.
Speaker 6:No. That's just, same with, like, the speed of light. Like, you're gonna encounter that shit. It doesn't matter if you're an ML model or, like, you know, it's just real. Right?
Speaker 6:So there are some of these things that are hard constraints. And, you know, where AI has struggled is, you tell an engineer something obviously wrong, they're never trusting you again. And they shouldn't, honestly. Like, you wanna fly in a safe plane. You don't want that happening.
Speaker 6:So there's a piece here where it's like reasoning, but inside these sort of structured constraints that apply, which you need, a, to win the trust of the user, but, b, to work. Right? Mhmm. The second piece is this kind of multimodality where, again, I'm not saying we need to build those models from the ground up. Mhmm.
Speaker 6:But you need to make sure you're getting really clean input. Right? And it's weird. It's not like you're just taking text input. Text is, of course, a part of it.
Speaker 6:And I think the LLMs have gotten so good that it gives us an ability to really ingest a ton of text documentation. For sure, that's a piece. But now you're also looking at the thing. You're looking at it visually. You're looking at it thermally.
Speaker 6:Is it getting hot? You're looking at readings from an oscilloscope. And each of those things has meaning to an engineer, and the idea is, can you now tease the right meaning from that? And so a lot of our work is basically on that data and fine-tuning side. How do we turn all of that into a package that can be fed into a set of models?
Speaker 6:And the other thing we found is, you know, a person is doing multiple different pieces of work. A person might be saying, hey. I'm cross-referencing some data sheets. What should this FPGA be expected to do? Can this pin handle 10 volts at a hundred degrees Celsius, or am I gonna short out the most expensive part? So that's almost like a kind of a text-based lookup.
Speaker 6:But then you're actually running a test. You're comparing waveforms. You're doing math. You're running simulations. And so what we found is we're using different systems as agents for different pieces of the specialized workflow.
Speaker 6:So you sort of have this meta agent that you're talking to, and then you have these others, and the line blurs now between what's an LLM agent versus what it's calling, you know, what we all call tools. But those tools, if you'd talked to me in, like, November 2022 before ChatGPT, I would've been like, these are machine learning models and companies. Yeah. But they're just tools. Now it's like, oh, I need thermal recognition stapled together with, like, you know, the view from the waveform.
Speaker 6:That's an ML model. That doesn't need to be, like, a 600,000,000,000 parameter model, but it's a nontrivial thing to do. And so you could look at this entire constellation as being sort of the product, if you will.
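A toy illustration of the meta-agent-and-tools pattern being described, with a hard physical-constraint check in front so an obviously unsafe answer never reaches the engineer. The tool names, voltage and temperature limits, and keyword routing are all hypothetical stand-ins, not Arena's actual architecture.

```python
from typing import Callable

def datasheet_lookup(query: str) -> str:
    return f"(datasheet answer for: {query})"      # stand-in for a retrieval tool

def compare_waveforms(query: str) -> str:
    return f"(waveform comparison for: {query})"   # stand-in for a signal-analysis model

def run_simulation(query: str) -> str:
    return f"(simulation result for: {query})"     # stand-in for a circuit simulator

TOOLS: dict[str, Callable[[str], str]] = {
    "datasheet": datasheet_lookup,
    "waveform": compare_waveforms,
    "simulation": run_simulation,
}

# Hypothetical hard constraints that must hold regardless of what any model says.
MAX_PIN_VOLTAGE_V = 3.3
MAX_JUNCTION_TEMP_C = 125.0

def violates_hard_constraints(voltage_v: float, temp_c: float) -> bool:
    return voltage_v > MAX_PIN_VOLTAGE_V or temp_c > MAX_JUNCTION_TEMP_C

def meta_agent(query: str, voltage_v: float, temp_c: float) -> str:
    # Hard constraints are checked before any model output is surfaced.
    if violates_hard_constraints(voltage_v, temp_c):
        return "Out of spec: exceeds rated voltage or temperature. Do not proceed."
    # Naive keyword routing; a real system would use an LLM to pick the tool.
    if "simulate" in query.lower():
        tool = "simulation"
    elif "waveform" in query.lower() or "scope" in query.lower():
        tool = "waveform"
    else:
        tool = "datasheet"
    return TOOLS[tool](query)

print(meta_agent("Can this pin handle the load?", voltage_v=2.5, temp_c=85.0))
```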
Speaker 1:Got it.
Speaker 2:One quote that comes to mind is, we were promised flying cars, instead we got 140 characters. I have to imagine that what you're building, and other tooling like it, has the potential you know, the exciting potential to me is sort of getting us out of this period of stagnation, right? There's a lot of companies that are building, you know, there's companies building supersonic jets, right? And they can use your tool, and then there's all these other things that we've yet to even imagine, or we imagined in science fiction, but now we should probably think about building. How optimistic are you around, you know, AI helping to accelerate and help us achieve these sort of science fiction dreams that we've had forever but have never quite been reality?
Speaker 6:First of all, I love the quote. It definitely speaks to my heart. And it's an interesting question because it can feel sometimes, reading the news, like doom and gloom. AI is taking our jobs. And it's like, you know, I'll go back to an example that I lived as a physics grad student.
Speaker 6:Right? I spent a lot of time, and I supposedly came in to do physics. I was like, oh, I'm gonna do all this great quantum mechanics research. And I was basically, like, a mechanic and a plumber for, like, 99% of the time. I was like, this thing is leaking.
Speaker 6:I think there's a water leak. Oh, this screw got bent. Like, I was doing less than 1% of physics. Right? And I was like a Yeah.
Speaker 6:You know, that was the reality. And I think that's a reality for a lot of us. And so if you could take a lot of that away, I think what it does is it changes, to your point, what human ambition should be. Like, what could you achieve if, let's say, it takes ten years on average to build a startup? In the past, what we'd considered a SaaS company is just gonna be a feature in the future.
Speaker 6:Right? And I think, yeah, the original Silicon Valley was about silicon. Right? It was actually hardware based. And I think we're gonna see a resurgence, hopefully, of that.
Speaker 6:Our hope is, like, what gets me personally excited is, one of the magical moments at my last company, Kimono, was you took someone who couldn't code, and you said, hey. Wow. With this tool, you could, like, write a web scraper. And we just got the most amazing customer comments. And I felt this joy of enabling people to do something.
Speaker 6:And, you know, hardware can be intimidating. You're in, like, a hardware lab, even in college. You're like, oh, man. This is really complicated. There's a high barrier.
Speaker 6:Does it need to be that high? You know? It's like we're seeing kids cheat on their essays with ChatGPT. That's a good thing. Yeah.
Speaker 6:We'll generate more stuff. What if we could let them cheat at EE lab with this? Like, would we have more people going into hardware? Can we lower that barrier? Like Mhmm.
Speaker 6:What if you wanted to build a drone on the weekend? Like, you should be able to. Right? You should have Jarvis. I mean, the goal is to be Jarvis for this and kind of enable everyone to be a little bit of a Tony Stark.
Speaker 1:I love it. I love it. In your announcement post, you highlighted five categories that it seems like you're going after in the initial rollout: semiconductors, aerospace, automotive, medical devices, and defense. Is that sorted by market size?
Speaker 1:Like, the burning need? Or do you just like the way that it sounded in that order? But I am interested to hear which of those has the most immediate need or is the largest market.
Speaker 6:Yeah. It's a great question. So we started with semiconductors. Right? That's sort of why we put it first, because, you know, it's the most complex, especially when a lot of what we do today is electrical Sure.
Speaker 6:Electrical engineering problems with semis. And if we take the philosophy of, we wanna be a little like Nike, start by selling it to the Olympic athletes, then get everyone to buy it, that was sort of the proving ground. And so we still have a few semiconductor companies, you know, that we're scaling up with, but I think that's, like you know, we all know the household names. It's a small set that are really valuable, but they establish the credibility. That actually helped us.
Speaker 6:We had a few great companies then come in inbound based on that. A lot of that was automotive and aerospace, and it's interesting. Actually, if you think about EVs and autonomous driving and aerospace, you have a huge amount of that coming in. Medical devices, we've got an early customer in there, and that's going really well. You know, there's a whole, like, FDA angle to this that we sort of need to work through, but it feels like the potential for impact is super high.
Speaker 6:And so that's sort of, like, a little bit of the landscape. I think we're seeing a ton of pull on the aerospace side. Like, especially if you look at that industry, we're gonna have more stuff that flies. And then you introduce space to the mix. And, like, aerospace and defense increasingly are kind of mixed.
Speaker 6:So you look at these things, and I think that's becoming kind of a unit in some way. And so it feels like there's a ton happening there right now. But, yeah, I mean, that's sort of just a little bit there was not, like, a whole bunch of science behind that ordering.
Speaker 1:Yeah. Makes sense. I wanna get your reaction to the tariffs. It seems like you're probably an American company selling to a lot of American companies. You know, regardless of what you think about the economics, it could potentially be a bull case for your company.
Speaker 1:How do you process the news, and what are you thinking about, if it shifts your strategy at all, using the series b over the next couple of years?
Speaker 6:Yeah. So, you know, it's you know, we actually do have a couple of international customers too. And, like, you know, I mean, obviously, it's a huge impact. Right? I think the first thing we did was just, like, call them because we're like, are you okay?
Speaker 6:You know? Like, you especially look at these, the margins on something like a car have dramatically changed. Right? It's
Speaker 1:like Yeah.
Speaker 6:You're doing some of the work here. You're doing some of the work there. Some of it in America. It's like yeah. And there's a lot of American factories.
Speaker 6:What percentage of the car is actually getting made here is a totally different question. So you have a ton of, like, kind of Yeah. Panic in the system. Right?
Speaker 3:Yeah.
Speaker 6:But, you know, you're right. Like, for us, it's accelerated customer pull and deployments. They're like, oh, no. We can't go ahead and have, like, that gross margin impact and therefore have to do this with people. We need technology.
Speaker 6:And so it's actually Mhmm. A forcing function. Like, if part of this means that US quality and speed and sort of ability to manufacture needs to get up really quickly, I think this provided the economic incentive for it, and it's just not there, honestly. This is just an accelerator. It's more fuel. There's more urgency than there's ever been on the customer side.
Speaker 6:So, yeah, I would say, like, overall, a lot of chaos there. But, net, I think, from our perspective, good, because it means there's a huge pull. And this problem we talked about you know, we talked about the 90%, the changes in those people. Like, we suddenly need to do a lot of this hard engineering in America. And this finally put, like, a dollar amount on how important that problem is.
Speaker 1:Mhmm. How are you planning for, kind of, as AI technology gets better? It feels like we're firmly in the copilot era. There's a lot of talk about, oh, the electrical engineering exam. I'm sure that these models have aced them at the highest level, and there's a prediction that, you know, AI will earn an IMO gold medal this year.
Speaker 1:And it's at, like, 50% on Polymarket. And yet I can't get an AI agent to book my flight yet. So how are you planning for integrating that, taking advantage of what's state of the art and amazing and what AI does well? And then how are you building around the rough edges of the kind of jagged edge of innovation?
Speaker 6:Yeah. You know, we think about this a lot, because I feel like the question of defensibility probably is going to come up much sooner in a company's lifetime than it's ever done before. And so it's sort of like playing with fire, which is, we wanna be on the glide path where our product automatically gets better as the titans go to war and the foundation models improve. Right? Like, we just wanna ride that wave.
Speaker 6:But if that's all we're doing, we're like, let's get documents from the Internet and help you do cross-referencing, okay, that is going to disappear super fast. And so, you know, to your point, it's tied in with, like, your math Olympiad or physics Olympiad question, which is, yeah.
Speaker 6:You've got your friend who's the genius, who's, like, really good at tests. Right? And then you've got your friend, who I bet is a different person, who is really good at building stuff. And usually, they're not the same person somehow. Like, in my experience, it's always your tinkerer friend who somehow didn't get the A.
Speaker 6:Right? That tends to be the way. And so we're not trying to pass the math Olympiad. We're trying to be the guy who's tinkering in the garage. Right?
Speaker 6:And so the tinkering-in-the-garage problem is very unsolved. Like, if you look at AI's capability there, it's a disaster. But as the base cognition gets better
Speaker 2:But
Speaker 6:but you get better to
Speaker 5:do it.
Speaker 2:Yep. To be clear, like, when I think about the guy tinkering in the garage, my dad was a high school teacher, and he taught this class called Project Make. It was some combination of, like, wood shop meets electrical engineering meets physics, and, you know, you're making rockets and all that stuff. So when I think about him solving problems, it's like the tinkerer in the garage, in that he would just try a lot of different things and experiment. And the beautiful thing about AI is, I have memories as a kid of him working on one little problem for, like, five hours on a Sunday, trying to figure something out, whether it's around the home or in class.
Speaker 2:And, like, he's basically running a series of experiments. Right? And so the potential of AI is, like, run every experiment at once in a simulated environment, run, like, a thousand experiments in, like, you know, ten minutes. Right? And when you start to think about what that can do for accelerating progress, that's the most exciting thing for me, because it's superhuman.
Speaker 2:It's like the tinkerer in the garage but multiplied by a million.
Speaker 6:Yeah. Totally. I mean, I wanna get your dad signed up. We're working on, like, an academic edition.
Speaker 5:Wanna give him
Speaker 6:free access to that, see if he could. We're looking for feedback. But I think that's
Speaker 2:He's he's retired, but but I'll put you guys in touch.
Speaker 6:If he's still interested. If he's still got a garage.
Speaker 1:That'd be great. Well, I mean, thanks so
Speaker 2:much for Here's the bar for the team. Five years from now, I wanna be able to design our own podcast equipment.
Speaker 1:Oh, yeah.
Speaker 2:The most cutting edge, you know, we need H100s in these things. Yeah.
Speaker 1:And the microphone.
Speaker 2:I don't know why yet, but it sounds cool. So Deal. Congratulations. I do have one last question. How much do you attribute your incredible energy levels to, you know, being a triathlete?
Speaker 2:Do you think it gives you an edge as an operator?
Speaker 6:I don't know. I would hope so. I do a lot less triathlon than I would like, but I feel like training the pain threshold is a useful thing. I think it's just a useful thing in life.
Speaker 2:Yeah. That's awesome. Love
Speaker 1:it. Well, thanks so
Speaker 3:much for stopping by. Congrats. Congrats on the milestone.
Speaker 1:For the series b.
Speaker 6:For having me, guys.
Speaker 8:Appreciate it. We'll talk
Speaker 1:to you soon.
Speaker 2:See
Speaker 1:you. Have a good day. Let's go to the timeline. Justin Ross is quote tweeting, did you see the Colossal company? They have brought back dire wolves using ancient DNA, with their firstborn on 10/01/2024.
Speaker 1:They waited a couple months to make sure that dire wolf was healthy and growing. It's been over ten thousand years since dire wolves were
Speaker 2:extinct. Can we do a deep dive on this?
Speaker 1:I have done a full video and deep dive on the company, Colossal. Remix. Remix. I was emailing with the company a while back. I'll rekindle that connection.
Speaker 1:Hopefully, I can get the founder on the show. He has a very funny collection of investors, but some really great ones. George Church from Harvard, a fantastically renowned scientist, is involved, and they're working on cool stuff. So this post was put in the truth zone. People said, hey. They're not technically dire wolves.
Speaker 1:They didn't really revive them using ancient DNA. It's more genetic modification of existing dogs. A little bit controversial, but JD Ross chimes in and says, I don't care if these are real dire wolves or not. They're very cute, and we should mix in golden retriever DNA and use them to hunt deer with us. And I couldn't agree more.
Speaker 1:And you know where I would love to have a dire wolf hanging out and maybe go on some deer hunting? In a Wander. I wanna find my happy place. Find your happy place. Find your happy place.
Speaker 1:Book a Wander. Book a Wander with inspiring views, hotel-grade amenities, dreamy beds, top-tier cleaning, and twenty-four seven concierge service. It's a vacation home but better. Code
Speaker 2:TBPN.
Speaker 1:And I wanna go to Mike Knoop, who we had on the show, founder of Zapier. He says, on the topic of, AI is trained on all of humanity, why can't it innovate? A big question that we're talking about. That question of the test taker versus the hacker. He says, new ideas come from two places.
Speaker 1:One, noticing similarities between two existing ideas, new ideas in one area translate into another, and two, logical construction, new ideas follow from prior axioms. One is easier and bounded, and two is harder and open ended. And Dwarkesh was talking about this. Like, a lot of scientific innovation just comes from somebody who's read so much of the scientific literature. They put together two random studies, and they find out that if you put those together, you get innovation.
Speaker 1:And that's true in all sorts of different industries, but specifically, if you've read all the papers, you've read all the books, you start making connections. This is what David Senra does a lot with his show, Founders Podcast. Go download it today. But he says two is harder and open ended. Paradigm one can look a lot like the career advice to work at the intersection of two fields because it's easier to become an expert.
Speaker 1:In contrast, being an expert in a single domain requires much deeper hierarchical knowledge, and innovation requires novel in-domain idea construction. This is the story of, you know, Elon Musk working in space and electric cars, maybe having both of those knowledge sets multiplies in some way. Yeah. Two, he says, is harder because you don't know if innovation is blocked due to the prior axioms not existing yet or if you just haven't combined them in the right way. We want to build AGI that can innovate. And due to the fact that one is bounded search, leverages ML strengths like pattern recognition, and can bootstrap from human knowledge,
Speaker 1:I think we will create AGI that can reliably do number one well before number two. Very, very interesting take. And I just think it's an interesting question that he's clearly asking. Like, these AIs, they're blasting through all the benchmarks. They're doing all these amazing things, but we're not seeing innovation come out of them yet.
Speaker 1:Or you could even think about, like, the joke test. Like, you kind of need to be innovative to come up with a joke. Yeah. It needs to be new and fresh. It's not just information retrieval.
Speaker 2:The idea of like creatively creativity is often just taking ideas from two different places Yeah. And combining them in some way.
Speaker 1:Yeah.
Speaker 2:And it feels like the models do that very well today and that you can ask it to make me a, you know, song. Yeah. Yeah. It's not doing it, you know, independently. It just like Yeah.
Speaker 2:You know, you have to sort of prompt it.
Speaker 1:This was the genius of Harry Potter Balenciaga. Like, the human element there that made that actually go viral wasn't the AI. It was the idea that combining Harry Potter, a kids' story, with Balenciaga, high fashion, that was funny. And then the AI just instantiated it. And I agree with you.
Speaker 1:Like, the idea of taking two disparate concepts and putting them together is where you get genius. Like, take your best performing ad and put it on a billboard with adquick.com. Like, that's gonna perform better. That's right. And so go to adquick.com, out of home advertising made easy and measurable. Say goodbye to
Speaker 2:AdQuick basically took the amazing attributes of online performance marketing and brought it into the real world.
Speaker 1:That's actually what they did. That's true.
Speaker 2:I'm serious.
Speaker 1:Yeah. That's the whole point.
Speaker 2:We're not messing around. Yeah. We are
Speaker 1:You get a dashboard. You get all the different things that you expect when you're running a performance ad campaign online, on Facebook. It gives you similar dashboards but out in the real world, and they do a lot to help you track the performance of your out of home campaigns. I thought this was a funny one. We'll move on to, Quake II has been fully AI generated and released by Microsoft.
Speaker 1:You can play it in a browser. Every frame is created on the fly by an AI world model. So they trained it on Quake II, had the AI play a ton, generated a ton of frames, and then it just takes the input from the controller and outputs the frames. And so there's no game engine. It's just, input is what you're doing on the controller, output is the game with visual fidelity. And you can see where this is going.
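A toy sketch of the kind of loop an AI world model like this runs, where the only game state is pixels: recent frames plus the controller action go in, the next predicted frame comes out. The predictor below is a random stand-in just so the loop runs, and the resolution, context length, and action format are assumptions, not details from Microsoft's demo.

```python
import numpy as np

FRAME_SHAPE = (240, 320, 3)   # illustrative resolution, not the demo's
CONTEXT_LEN = 8               # how many past frames the model conditions on

def predict_next_frame(frames: list[np.ndarray], action: dict) -> np.ndarray:
    # Stand-in for the trained world model; it just perturbs the last frame
    # so the loop has something to produce.
    noise = np.random.randint(-5, 6, FRAME_SHAPE)
    return np.clip(frames[-1] + noise, 0, 255).astype(np.uint8)

def play(num_steps: int = 60) -> list[np.ndarray]:
    frames = [np.zeros(FRAME_SHAPE, dtype=np.uint8)] * CONTEXT_LEN
    for _ in range(num_steps):
        action = {"move": "forward", "fire": False}   # would come from the controller
        next_frame = predict_next_frame(frames[-CONTEXT_LEN:], action)
        frames.append(next_frame)                     # no engine, just predicted pixels
    return frames

if __name__ == "__main__":
    print(f"Generated {len(play())} frames with no game engine involved.")
```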
Speaker 1:It's crazy. It got a lot of hate.
Speaker 2:Yeah. Quake read this? Quake dad. Quake dad.
Speaker 1:Clearly a fan of Quake.
Speaker 2:Dedicated to the bit. Committed to the bit. He says,
Speaker 1:this is They haven't released a new Quake in like two decades. So this guy has been in the trenches forever.
Speaker 2:This is absolutely disgusting and spits on the work of every developer everywhere. Bold. John Carmack says, what? Question mark? This is impressive research.
Speaker 1:I love that because he's the creator of Quake. And there was an amazing meme that was, like, John Carmack holding a white Monster, being like, oh, you completely replicated exactly what I did? Awesome work.
Speaker 2:Based.
Speaker 1:And it's like, the developer himself is like, this is cool. But he did unpack it a little bit more, and so I wanna read through this. He says, I think you are misunderstanding what this tech demo actually is, but I will engage with what I think your gripe is, AI tooling trivializing the skill sets of programmers, artists, and designers, and that's real. My first games involved hand-assembling machine code. What a code.
Speaker 1:This is why he's one of the greatest programmers of all time Right. And turning graph paper characters into hex digits. Software progress has made that work as irrelevant as chariot wheel maintenance. Yeah. You don't wanna be in the business of chariot wheel maintenance.
Speaker 1:Not a big industry today. But building power tools is central to all the progress in computers. Game engines have radically expanded the range of people involved in game dev, even as they deemphasize the importance of much of my beloved systems engineering. Maybe the first person to say that. Systems engineering is very, very hard, and it's a huge time suck. I mean, when he built Quake, he had to build the whole game engine.
Speaker 1:He had to build everything: the idea of a floor that you can't fall through, or a wall that you don't wanna walk through. You have to write all that code from scratch. Instead, now you just fire up Unreal Engine and you get Fortnite out of the box, or you build in Roblox. Right?
Speaker 1:Yep. So he says AI tools will allow the best to reach even greater heights while enabling smaller teams to accomplish more, and bring in some completely new creator demographics, people who don't know systems engineering or even programming, for example. Yes. We will get to a world where you can get an interactive game or novel or movie out of a prompt, but there will be far better exemplars of the medium still created by dedicated teams of passionate developers. And this is like the innovation concept, this idea that, you know, the Harry Potter Balenciaga game will be the one that goes viral and gets a lot of attention.
Speaker 1:If you have distribution, you can capitalize on that. But, also, if you have a novel idea that AI couldn't think of, you will have a breakout success. And so we'll focus more on game mechanics, game design. Like, there was this game, Balatro, that takes poker cards. And you're basically playing poker and trying to create, like, royal flushes and whatnot, but it adds all these crazy mechanics on top of it. It was a very simple game just designed in an engine.
Speaker 1:Not crazy on a technical level, but the game design was so incredible and so novel that it just went massively viral, and, like, the solo developer, basically, I think he had a few people on his team, just printed and became, like, the number one game of the year, or of that quarter, on Steam or something. Well, what should you do if you're printing, John? Pay your taxes. That's for sure.
Speaker 2:Put your sales tax
Speaker 1:on autopilot. Tax on autopilot. Spend less than five minutes per month on sales tax compliance. Go get started. They're back
Speaker 2:by what you're doing
Speaker 1:in your benchmark.
Speaker 2:Five minutes a month.
Speaker 1:You don't wanna be bogged down. And, I mean, this is true. Like, you wanna be focused on the innovation that your company is doing. You don't wanna be dragged into a bunch of unnecessary reports and audits and
Speaker 2:You can be thanking your lucky stars that you're not a part of the trade war right now. Yep. But take the opportunity to get your sales tax ducks in a row. 25 states are now taxing software sales, and Numeral helps you stay compliant. So just
Speaker 1:do it. HQ and check them out.
Speaker 2:Just do it.
Speaker 1:And I think that's a good place to wrap up. What do you think, Jordi?
Speaker 2:Yeah. Fun show, John.
Speaker 1:Great show.
Speaker 2:I enjoyed podcasting with you today. It was fantastic.
Speaker 1:Can't wait to do it tomorrow. I know. I was so worried. I checked the date in my intro and I was like, is it Wednesday already? It's not.
Speaker 1:It's Tuesday. We got three more days of glorious podcasting.
Speaker 2:I know. It's gonna be good. I have a feeling there'll be more news this week.
Speaker 1:For sure. The size gong hasn't rung its last gong sound. That's for sure. The gong is still fresh. Keep ringing it this week.
Speaker 1:Long as we're Yeah. It's great. Three
Speaker 2:That's beautiful.
Speaker 1:Three amazing series b's. We got all three founders on. I think we did great.
Speaker 2:Three of Three the Jaz, A series
Speaker 1:b? Oh, guess it's
Speaker 3:series b. More like
Speaker 2:a pre seed. Seed. More of a pre seed.
Speaker 1:Anyway, three big, $10-plus million rounds, in the tens of millions. Great to see it. Lots of money flowing into startups that we love to see. All working on very
Speaker 2:interesting things. Remain bullish on America.
Speaker 1:Me too. Me too. Anyway, thanks for listening.
Speaker 2:Have a loose faith.
Speaker 1:We will see you tomorrow.
Speaker 2:Have a great afternoon. Have a great afternoon. Cheers.