Jake & JZ

Two days after recording our last episode about AI, DeepSeek R1 was released, prompting some of you to ask us a set of new questions about how AI will affect the future of work, products, startups, and investing.
Jake is off this week, but we invited our cofounder and partner Eli Blee-Goldman back to help JZ answer these questions and — for at least a brief moment — get us caught up on the latest developments in the world of AI.

This week on episode 16 of Jake & JZ, we talked about:
  • Why DeepSeek R1 caused such a stir
  • How AI innovation can come from anywhere
  • Our 3 newest investments at Character Capital
  • Types of AI founders (with examples!)
  • Two important lessons from a Character founder
  • Doing it right vs getting it right
  • The downside of being able to build so quickly
  • Our recent Foundation Sprint Bootcamp
  • How we get better at sprints by learning from the community
Have a question for us? Send it to hey@jakeandjz.com and we’ll answer it on the show.

📩 Get an email about every episode at JakeAndJZ.com
💸 Learn more about our seed fund at Character Capital
🥸 Find Jake on LinkedIn and X
🤠 Find JZ on LinkedIn, Bluesky, and X

Creators and Guests

Host
Jake Knapp
Host
John Zeratsky

What is Jake & JZ?

Weekly podcast about startups, design, marketing, technology… and anything else we’re thinking about. 🤓

Hosted by Jake Knapp and John Zeratsky, co-founders of‍ Character Capital and bestselling authors of Sprint and Make Time.

JZ (00:00)
Hey, so what did you think of the bootcamp yesterday?

Eli (00:03)
If I had to describe it in one word, I would say energy. You know that feeling when you see the chat? It's like Jake said, where's everyone from? And then it's moving so fast you can only briefly look.

JZ (00:06)
Yeah.

Yeah,

yeah, it's impossible to keep up. It's so fun to see the enthusiasm that people have for, well, in this case, I guess I should explain what we're talking about. Yesterday we ran a foundation sprint bootcamp. It was the first time that we did any kind of interactive training about the foundation sprint, which we recently introduced with our new book, Click. And we offered this

Eli (00:20)
Yeah.

JZ (00:42)
workshop yesterday for people who pre-ordered eight or more copies of Click. So these are super fans who are really into sprints. Many of them run their own design sprint businesses. They changed their careers after reading Sprint and learning how to run design sprints. So it was just kind of a special event for them.

Eli (01:01)
You know, I think the interesting thing you said is the community part. That's kind of insane. What was your high point?

JZ (01:08)
Well, the way we ended up doing it was kind of interesting. Jake had this idea at the last minute, which was, instead of just teaching the method, why don't we actually demonstrate a foundation sprint? So Kristen, who, you know, is on our team, and Jake and I, the three of us pretended to be the founders of a coffee company. And so we actually ran through a full foundation sprint.

Participants had the option to either just watch and listen, or they could participate as well. In the Miro board, we created another section where they could kind of play along as we went. And I thought that was cool because it created the surface area for us to not just talk about, here's the method, here are the exercises, the steps, but to do a little bit of inner monologue. Like, well, here's what I'm thinking about when I'm identifying competitors, or when I'm trying to

figure out where we'll be most differentiated. That was really fun. And the other thing that got me excited was, you know, people have questions about things that we just don't know about. So for example, people would ask, how would you recommend selling this into a corporate client? And I'm like, I have no idea. You know, I've worked at big companies, but I've never been that sort of consultant person who's trying

to close a corporate deal. And this happened with the design sprint and with the Sprint book as well. We actually learned a lot from the community. It wasn't just us standing on top of this mountain, sending down edicts. It was definitely a two-way, or a multi-way, collaboration, where as people went out and tried things, we learned from them. We incorporated those into updated versions of the process.

That hasn't quite started to happen yet, but I can tell it's going to, and that was very exciting.

Eli (03:01)
Totally.

JZ (03:02)
Right. Well, we've got you back for another podcast episode. Should we go ahead and kick it off? Awesome.

Eli (03:06)
Let's do it.

JZ (03:28)
Welcome to episode 16 of Jake & JZ, the weekly podcast about startups, design, technology, marketing, and other stuff that we are thinking about and working on. I'm JZ, co-founder of Character Capital and co-author, with Jake Knapp, of Sprint, Click, and Make Time. And if you want to get our weekly newsletter with the latest episode and some other cool stuff that we found, go to jakeandjz.com. Jake is off today.

But I'm super lucky because I've got my co-founder and partner Eli Blee-Goldman back again to talk about some of the recent developments in AI and what that means for the future of how people will build products and build companies. And if that sounds familiar, it's probably because it was also the subject of our last podcast episode. Last time we had what I think ended up being a pretty interesting conversation about

AI and what was at the time kind of the cutting edge of new models, specifically some of the newer models that had been released by OpenAI. And then, as tends to happen in the AI world, two days after we recorded that episode, DeepSeek R1 came out. DeepSeek had made a minor splash back in December, I think, with V3. That was the first one that was on anybody's radar. And then they released R1 in January.

And the world just kind of freaked out, you know. The stock market sold off, and all the VCs took to Twitter to pontificate about what this meant for the future of AI. And a few people reached out to us, especially after the last episode, and they were like, hey, we want to hear what you think of this, and specifically what you think of this, Eli. So anyway, we decided to reconvene here and do a follow-up episode.

Why don't we start with just a little bit of background? Can you tell us, where did DeepSeek come from? Why did it make such a splash?

Eli (05:22)
Yeah.

Yeah, so DeepSeek came from a very large Chinese hedge fund called High-Flyer. And they had been doing all sorts of algorithmic trading over the years. And the founder of the firm is actually a big fan of Rentech, which is a hedge fund that has done extremely well.

JZ (05:43)
a legendary quant hedge fund. And our geeky claim to fame is that we've co-invested with employees or partners of Rentech. One of them is now doing some VC stuff, so we feel pretty cool by association.

Eli (05:57)
That's

right. Yeah. And I remember when we made that investment, we were both pumped. We were psyched about it, but the rest of the world didn't seem to know much about Rentech.

JZ (06:06)
Right, yeah, yeah,

it's a very specific Venn diagram of people who would be excited by that information.

Eli (06:12)
Totally, yeah. And the interesting thing about High-Flyer, as this large algorithmic trading fund, is that they had acquired a lot of NVIDIA chips, specifically these, I think they're called A100 chips, before there were restrictions placed on them. And so they were able to get, I think it's 10,000 of these before the US restrictions hit. And they released this model that really changed how people think

in two ways. The first is the primacy of the very large, mostly US-based foundation models being the only players in the game. I mean, there's a few others, but basically I think that was the common narrative. And the second was, yeah.

JZ (06:48)
Yeah.

Right. There was kind of this accepted

narrative that the only way to compete, and hopefully to win, in that market would be to raise billions of dollars, buy as many chips as possible, and hoard thousands and thousands of the world's best AI researchers. That was the archetype. And so this kind of tossed a bit of cold water onto that fire.

Eli (07:15)
That's exactly right. Yeah. And that's exactly the second point I was going to make, which is cost. You know, I think people thought there were these incumbents that maybe could be peeled off a little bit, but basically not. And then, just as you said, everyone thought, man, you have to raise 10 billion, 100 billion, a trillion, to make a really great model. And what the results from DeepSeek seem to show is that's not the case at all.

JZ (07:20)
yeah.

Eli (07:40)
They claimed that they did it with $5.5 million, and they did it with about 2,000 of these H800 chips. And if you think about something like Llama by Meta, they had trained this gigantic model with like 16,000 of these H100s, like amazing chips. So it seemed to challenge a lot of those notions. That was my take. What about you?

JZ (07:47)
Mm-hmm.

Mm-hmm.

Yeah. So to be clear, there was some controversy, and there continues to be some controversy, about specifically how cheap it was. But even if it's off by an order of magnitude, even if it was really $50 million, it's still so much less capital intensive than a lot of the other models. Yeah, I think those were the headlines for sure. I think there were also some interesting, essentially product improvements. And just to level set,

you know, DeepSeek, fundamentally, the simple version is, it's an LLM chatbot. It's like ChatGPT, it's like Claude, it's like Gemini. It's a tool where you open it up and you can ask it to do things for you. You can ask it to research things, to help you write things, to summarize, et cetera. It can help with coding, all the same stuff that LLMs are good at. So if you're listening and you're not really sure what we're talking about, it's another

thing like ChatGPT. There was, I think, a pretty interesting product improvement that you kind of brought up even pre-DeepSeek. You mentioned it the last time we chatted about this, which was this notion of expanding test-time compute. So basically taking more time. You know, the impulse with software, and truly with any product or experience, is always that faster is better, right?

You're putting a query into a search box. You're prompting an AI. Why wouldn't you want the update instantly, right? Why wouldn't you want the results to come back instantly? But in this case, taking a bit more time actually produces better results. And then the other really interesting thing was that R1 specifically, does the R stand for reasoning? Is that right or not? Okay. Yeah.

Eli (09:53)
Gosh, or reinforcement learning?

JZ (09:57)
Well, coincidentally then, R1 also became known for showing its quote-unquote reasoning. So you can almost see it think. If you use it, it's exploring, it's kind of sharing some of its inner thought process, if you will. So regardless of cost, regardless of where it came from, regardless of anything else, those were two improvements that a lot of the other bigger names

sort of raced to catch up with. And so I thought that was pretty interesting.

Eli (10:26)
Yeah, it's such a fascinating one. And it actually cuts to the heart of a couple of the specialties, as far as I understand it, of what DeepSeek is doing. And one of those is this notion of active parameters. So when you have these big models, you can think about the parameters as adjustable numerical values in the model that are learned from training.

So these determine the outputs that you get. They determine the behavior of the model, et cetera. And while R1 has a lot of parameters, I think it's on the order of 600 billion or something, in any given moment when you query it, it uses many fewer active parameters than competing models.

And the way that they do this is they kind of split it into expert modes. The model is trained to have expert modes. So presumably, if you ask a coding question, it's not sending it out to the whole trained model. It's just sending it to the coding domain. So I thought that's kind of an interesting way to go about improvements. I think that's led to some of the decrease in costs, which is crazy.
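[Editor's note: the "expert modes" Eli describes are generally known as a mixture-of-experts (MoE) architecture, where a small router activates only a few experts per request, so active parameters are a fraction of the total. The toy sketch below illustrates the routing idea only; the expert functions and keyword scoring are made up for illustration and are nothing like a real learned gating network.]

```python
# Toy mixture-of-experts router. The model holds many "experts," but each
# query activates only one, so the parameters that actually run per request
# are a fraction of the total. Experts and scoring here are illustrative.

EXPERTS = {
    "coding": lambda q: f"[coding expert answers: {q}]",
    "math": lambda q: f"[math expert answers: {q}]",
    "general": lambda q: f"[general expert answers: {q}]",
}

def route(query: str) -> str:
    # Real MoE models use a learned gating network over token embeddings;
    # a keyword count stands in for it in this sketch.
    scores = {
        "coding": sum(w in query.lower() for w in ("code", "function", "bug")),
        "math": sum(w in query.lower() for w in ("sum", "prove", "integral")),
    }
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score > 0 else "general"

def answer(query: str) -> str:
    expert = route(query)  # only this expert's "weights" run
    return EXPERTS[expert](query)

print(answer("Why does my function have a bug?"))  # routed to the coding expert
```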

JZ (11:24)
Wow, yeah.

Eli (11:45)
I looked this up a while ago, and the API costs at the time, I'm sure they have shifted, but the API costs that I saw reported were something like 14 cents for input and $2 for output per million tokens for R1, compared with o1 at the time, which was $15 for input and $60 for output. And so if you look at that, man, it's more than a 100x cost difference for the input. So I think that's what had people concerned.
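[Editor's note: Eli's "more than 100x" claim checks out on the figures he quotes. A quick sketch of the arithmetic, using the per-million-token prices as reported at the time; both providers have changed pricing since.]

```python
# Per-million-token API prices as quoted in the conversation.
r1_input, r1_output = 0.14, 2.00    # DeepSeek R1: $0.14 in, $2 out
o1_input, o1_output = 15.00, 60.00  # OpenAI o1: $15 in, $60 out

input_ratio = o1_input / r1_input    # ~107x cheaper on input
output_ratio = o1_output / r1_output  # 30x cheaper on output

print(f"input: {input_ratio:.0f}x, output: {output_ratio:.0f}x")
```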

JZ (12:04)
Wow. Yeah.

Yeah.

Yeah, yeah. A couple of the things that caught my attention were that the models are fully open source. As far as I know, well, I shouldn't speculate about things I don't totally understand, but you can go to the GitHub page and you can see the code. It's very, very open. They are also doing what I would describe as building in public. You know, this is something you sometimes see founders doing, like really being super

Eli (12:13)
Yes, great.

JZ (12:38)
transparent about what they're building and what they're learning. And I would say that DeepSeek is very much building in public. They're really transparent about what they're doing. So to me, this all added up to reinforcing the very first point that you made, which was this notion of dominance by a few select players, a few chosen players.

I don't want to say it's over, but really, I think there's a lot of doubt about that model now. And it points to something that we have seen with the companies that we look at and the ones that we invest in, where increasingly what we're seeing is AI innovations coming from anywhere. And we can talk about a couple of examples in a little bit.

There's a couple of founders that we know who are in their early twenties who are working on models that are challenging stuff that's coming out of Google in terms of speed. And if you zoom way, way out, and you set aside, okay, how much cheaper is it? Is the product better, you know, showing its sort of reasoning, its thinking?

Set that all aside, and I think the big, big theme is this stuff can come from anywhere. The world of innovation and cutting-edge research around AI is far more distributed, along dimensions of geography and age, than anybody thought before. And that's super exciting.

Eli (13:56)
Yeah.

It is. Yeah, and what a great way to put it. It sparked something in my mind as you were saying that: with each model release, intelligence is becoming more distributed than ever, but we're still seeing pockets of very deep expertise, very deep intelligence, that is not evenly distributed, even with these big models. So that's a really fascinating dynamic, I think, that

JZ (14:21)
Yeah, right.

Eli (14:28)
Some people believe it doesn't exist anymore, but we're still seeing it.

JZ (14:32)
Yeah. So all right. Anything else? Was there anything else specifically about DeepSeek that stood out to you as being noteworthy, newsworthy, or did we cover the main factors? Cool.

Eli (14:45)
I have a few more things that I just want to touch on briefly.

So the first is that we should not be so naive as to take these numbers at face value, especially around the costs. We should be clear that I think both of us are a bit skeptical; at least I am, and I think you are as well. The other thing is that your input can be used for training. And so I think this is a question that both companies and certainly

JZ (14:56)
Right, yep.

Eli (15:12)
nation states have to increasingly ask themselves, which is: where are these useful queries going, and what are people doing with that information? And so I think that has come to the fore in a much greater way with this model.

JZ (15:22)
Totally.

Yeah, I thought it was interesting to see a few companies, Perplexity was the one that was most obvious to me, actually integrating the model, because DeepSeek is open source, into their product. Perplexity is a US-based company, running it in data centers presumably in the US, in a way that avoids those issues. But yeah, that I think is a super important point.

Eli (15:51)
And maybe to that end, actually, with the Perplexity example, there's this notion, and I really believe this, that alignment should be viewed as iterative. And by that, I mean we typically think about quote-unquote aligning an AI model as a discrete output. Like it's either...

JZ (16:07)
And can you define briefly what alignment is in this context?

Eli (16:11)
Yeah, so in the context that I'm going to use it, and this will not be the universal definition, it basically means good for humans. That's what I'm going to say. That's my definition here.

JZ (16:17)
Okay.

Got it. Okay, yeah.

Yeah, so

like our interests and the interests of the AI are aligned.

Eli (16:28)
Yeah.

Yeah. But I think what the rapid pace of the development of these models has shown is that it really is more of an iterative process. That's how we should think about it, especially because they're building off of each other. And one of the controversies was

whether they used this method of distillation in training the model. Distillation is a method by which you use the outputs, and even the chain of thought, of another model, and that's the teacher model. And then you have a student model that learns from those outputs. And that's interesting because you can get compounding gains in knowledge. You don't have to retrain everything from the outset.
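[Editor's note: the teacher/student setup Eli describes can be sketched in a few lines. The student is trained to match the teacher's full output distribution (soft labels), not just the single right answer. This is a minimal illustration of the distillation loss, not DeepSeek's actual training code; the logits and temperature are made up.]

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing more of the
    # teacher's "dark knowledge" about near-miss answers.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's distribution against the teacher's
    # soft targets; minimizing this pulls the student toward the teacher.
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]  # confident teacher
student = [2.0, 1.5, 1.0]  # untrained student: flatter distribution
print(distillation_loss(teacher, student))
```

The loss is lowest when the student's distribution exactly matches the teacher's, which is why repeated rounds of distillation can also propagate the teacher's errors, the risk discussed next.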

JZ (16:52)
Yeah.

Eli (17:16)
But I think there's a really interesting question here, which is: does distillation get you closer to the ground truth of any given thing, or does it have an error propagation mechanism in it? And I'm not sure, but I think that's probably a risk as these models become smaller and distillation is used more deeply. I don't know if you have any thoughts on that.

JZ (17:27)
Right.

Yeah.

Yeah.

I don't, but I agree that it's an important question and a risk, particularly as people continue to push out the edges of what's possible with AI and try new applications, new ways of leveraging this technology. And we see it too. I mean, there are ways in which we use AI where it's like, clearly that was useful and helpful and good, and,

you know, apart from the odd hallucination here or there, mostly right. But then there are other times when we use it, and I just ran into this with the bootcamp thing. I tried to basically create an AI summary of the questions that were in the chat. Like you mentioned, there was this hyperactive chat feed of people asking questions, and we were jumping in to answer some of those.

That was a case where I used multiple different products and models, and I spent about an hour working on it this morning and couldn't get a really useful and good result. And so I think there are a lot of cases like this where, as we continue to push further and further and try newer and newer things, obviously models will continue to get better and products will continue to get better.

Eli (18:38)
How interesting.

JZ (18:50)
But if they get divorced from the ground truth, I love that phrase that you used, if they get divorced from the ground truth and they start to become almost self-referential, kind of trained on things that other models produced, you end up with these results that look okay. If you skim them, they seem reasonable and good. But if you actually understand the context and the meaning of the content and you look closely at it,

it's not good. It's not valuable. And so anyway, I don't know where that's going to go, but I totally agree with you that that's an important aspect and an important open question of this.

Cool. So should we pivot into maybe talking about what some of the impacts or implications might be for people who are building products with AI or people like us who are investing in AI products?

Eli (19:41)
Yeah, I think we should and I want to give the hardest question to you first, which is like, has this changed your view on AI investing in general at all?

JZ (19:52)
So in general, I don't know. I don't think so, not like big, big picture in general. But there's a specific type of company that we have invested in, in very limited situations, which is a sort of research-centric AI startup, kind of a frontier research type company. And we have this framework that we sometimes talk about at Character when we're evaluating startups, which is of the four different strategies that AI founders can pursue.

And in reverse order, those are: products. So it's really just about, can you create a great product leveraging AI? It's still really hard to make great products. That's something some founders are really good at. Another is distribution. Do you know how to market, to sell, to distribute, to put a product, an AI product or technology, in the hands of customers in a way that's going to really work for them?

Another is data. So do you somehow have access to a type of data, proprietary data in a particular industry or particular type of business that is harder for other founders to get access to? And the fourth one is research. Are you fundamentally doing cutting edge, new research, net new research on models that other people can't do or most people can't do? And the investments that we've made in that category are things like Phaedra.

which is a team from DeepMind who had built AI agents for controlling the cooling systems for Google's data centers and are now offering that product to other companies. Orbital, who has created models for designing molecules for new physical materials. And Inductive Bio, who has models that predict how drug molecules are going to interact in the human body, ahead of clinical trials.

And in those cases, we've been willing to make those investments because the teams were truly world-class, truly exceptional, and they had done things like that before. So in the case of, well, really all three of those, but especially Phaedra and Orbital, the teams, when they were at DeepMind or in their previous roles, were publishing papers on this stuff years before most people were even paying attention to AI.

These teams were already out there on the horizon doing this research. And so, we pay close attention to this stuff, but we're not researchers, right? We're not academics. Knowing that they had done this before gave us confidence and allowed us to build conviction as investors. All of that is a long preamble to say that the thing where I've been updating my worldview a little bit is seeing that such unexpected and such dramatic

innovation and advancement can come from such an unexpected place. And combined with a few of the founders that we've recently met and have been investing in, it gives me a lot more confidence that you don't have to be credentialed. You don't have to be the ex-DeepMind, the ex-OpenAI, the person who's been doing this for a decade. You can just be smart and scrappy and hungry.

Eli (22:47)
Yeah.

right.

JZ (22:58)
And on that research frontier specifically, you have the potential to compete with some of the biggest and most capitalized companies in the world. You know, I don't know that we're going to become a deep tech investor or anything like that, but it has certainly opened my eyes to that style of investing in a greater way than they were open before.

Eli (23:07)
Yeah.

Yeah, I love that point. And I also want to encourage any founders or potential founders that are watching this to reach out to us, especially if you have these bold and ambitious ideas. Because I think one of the things you're seeing here, and you've probably heard on Jake & JZ in the past, is that our firm is just the three of us. We are nimble and we move fast as well. But if you're working on something really, really awesome, just

tell us about it. We want to know about those hard things. Because when I heard this $5.5 million figure, I was skeptical, but at the same time I was like, well, that's sort of like a chunky seed round. What does that mean?

JZ (23:47)
Yeah.

No, that's super interesting. Because in the past, we'd said we're mostly not going to invest in foundation model companies, because they're raising $100 million seed rounds, right? That's just clearly outside our focus area. But yeah, to your point, okay, we need $5 million? Now we're getting closer to the typical range for a seed round.

Eli (24:21)
One thing it's highlighted for me, and I'm dying to hear your take on this as well, JZ, is that deciding when to utilize any given tool versus a different tool has become increasingly hard, because the costs decrease and the models improve so quickly that it's almost like if you wait another week,

you'll have saved money by waiting that week. So I think it's become increasingly hard to decide when to press go on whatever you're building. I don't have a great answer for that, but I think that's something founders need to think very closely about. And it probably gets to the heart of your end goal. So I think, once hopefully you've run a foundation sprint or a design sprint, you start to get clarity on what you're building, what it might look like, and then execute on that.

JZ (24:43)
Yep.

Yeah.

Eli (25:12)
but just know that if you wait another week, there's the possibility that that execution could be even faster. Which is just sort of insane.

JZ (25:17)
Totally. No,

yeah, no, that's a really good point. It reminds me of an archetype of founder that we sometimes meet. I mean, these people are usually software engineers, but I think of them as AI practitioners. They are really good at understanding exactly what you're talking about: which existing tools to use,

which things to build, and where it makes sense for you to invest or not. And there was, I'll try to find it and include a link in the show notes, but one of the founders in our portfolio posted something on LinkedIn that was like, AI has helped me build so much faster than before, but I'm also cognizant of the 3 to 5% of whatever I'm building that I know, if I just wait

a couple of months, I'm not going to have to build anymore. And so it's almost like a capital allocation game: where do I invest versus where do I outsource, and understanding how that is shifting over time. So yeah, I agree that that has become a really important skill. It's very nascent, and we don't even really have

Eli (26:30)
Yeah.

JZ (26:33)
the terms for it, because it's not the same as just a pure software engineer. And software engineers, obviously, always use off-the-shelf tools and build on lots of frameworks and infrastructure and things like that. But something feels a little bit different, I think, just in the capabilities and in the pace of some of the building blocks people making products can use today.

Eli (26:34)
Yeah.

Totally. I think about how, in the past software era, you often had to wait until a new release came out, a new OS or something. You had to wait. You had to buy it. Usually it would be more expensive. Usually it would have more features. But there was a definite date that it was going to happen. So you knew, well, I should wait until this happens, or I'm going to be building off of this version of Flash, or whatever you were doing.

JZ (27:11)
Right.

Yeah, or like a particular

database that you've chosen, or, you know, what cloud environment you're going to host in, and whatever.

Eli (27:24)
Right.

Right, exactly,

exactly, yeah. And now we're in this weird world where every day it's cheaper and better. Like, that's odd.

JZ (27:38)
So I teed this up a little bit, but I thought it would be interesting to talk, just anonymously, we won't name names, because I think most of these, if not all of them, have not really been announced yet, about our last three investments. They each demonstrate

a very different but related implication of, or perspective on, what's happening: the effects that we've just talked about, better, faster, cheaper, distributed innovation, all these things that we've been talking about as implications of DeepSeek. The last three of our investments, I don't know, this framework just sort of snapped into my head recently, and I thought it might be interesting

to think out loud and share with you and get your feedback on. So the first company, the most recent one, I briefly mentioned them before. These founders literally dropped out of college. They're in their late teens, early twenties, and they recently produced a model, I won't say exactly what it does, but a model that is competitive with models from Google and other large

companies on some specific tasks. Not an LLM, it's more specific than that. And these are really, really smart founders, of course, very brilliant engineers. But they're 20. It's not like they've spent their careers on the forefront of this world and have been credentialed and published and all these things.

Eli (28:45)
Yes.

JZ (29:09)
So I think of that as the distributed innovation implication: for every one like this, there's probably a hundred more that are doing something similar and interesting that previously felt like the sole domain of either well-capitalized big companies or credentialed AI researchers. So anyway, curious if you have any takes on that as one archetype, one example.

Eli (29:38)
I think you're exactly right. And what sprang to mind for me is how much this team also built in public, truly built in public. And that was the thing that allowed us to get an understanding of why they were exceptional. And so they weren't making promises. They're not marketers.

JZ (29:47)
Totally. Yep. Yep.

Eli (29:58)
They're just builders and executors and we could see that. And I think that's the signal. Like that was the credential for me versus like trying to say, this person worked at Google in the past or they went to all these great schools and then they did the typical thing. Like they didn't need to do that. They just sort of built and showed it to us.

JZ (29:58)
Yeah, yeah.

So let's call that company A, you know, the first example. The second example, we literally just wired last week, so this is a very fresh investment, and very different from company A. We'll call this company B. Very different, but I think also representative of some of the implications of DeepSeek.

Eli (30:23)
Okay.

JZ (30:41)
And this company is building in the tax space, and they are the prime example of where deep domain expertise really matters. They are not going to be developing new models, new foundation models at the frontiers of research, right? They're going to be using tools that exist. They're going to be using models that exist. But they are solving problems that the typical YC founder

could not just get up to speed on in a couple of weeks or a couple of months. These founders have literally spent decades in this domain, learning about the ins and outs of taxes and regulation and accounting and finance and all sorts of related intricacies that are not easy to pick up. And so I thought that was a really interesting archetype, and it's one of the patterns that we look for:

if you understand this technology, if you understand what we were just talking about, like when to build versus when to adopt, and you can pair that with deep domain expertise and experience that is not easy to replicate, it becomes a really powerful combination. And the more the tools get better and faster and cheaper,

Eli (31:48)
Yeah.

JZ (31:59)
the more important it will be to have domain expertise because the obvious solutions and products will just be that much faster and easier and cheaper to build.

Eli (32:09)
Yeah, you're so right. And I think about the Venn diagram of these types of people: the circle of people who have the experience to do this is probably very tiny, and the circle of people who also have the vision or knowledge is even tinier when you do that overlap. So there have to be very few people like this. I think this is a rare type of founder, sort of the rare Pokemon, you know, that

JZ (32:20)
Yeah.

Eli (32:38)
suddenly appears. But it's fascinating to see how much accumulated or compounded knowledge these two founders have. I think that's rare. And then it's even rarer that they said, let's make something new utilizing AI. It's really, really interesting.

JZ (33:00)
Yeah. Okay. So in this little framework we're laying out, company A was two young college dropouts who are challenging big tech by doing frontier research in AI. Company B is experienced engineers who founded a company. They have very deep and difficult-to-obtain domain expertise in

tax, which is the world that they're building in. They're not going to be doing cutting-edge research on the AI side, but they know how to build on top of the tools. They know how to use what exists in the world. Company C, I think, was our last investment of 2024 and is a health care founder who is, I think, in our portfolio,

probably one of the best, in the top three, of the AI practitioner archetype that I described. He certainly also has domain expertise. He's also a brilliant engineer. I don't mean to exclude him from any of those other categories. But he is, and he's the one I mentioned with the LinkedIn posts that I'm gonna find and link to, he is so tuned into: how can I use what exists in the world today

Eli (33:51)
Yes.

JZ (34:11)
to build faster than anybody thought was possible? And then as I'm doing it, how can I make sure that I'm not investing today in building software that is gonna be off the shelf, available a month from now? Understanding where that boundary is and just being really efficient with his time and his resources so that he can focus, not on building a bunch of stuff from scratch, but on

what I think is the hardest and most important thing when you're starting a new company, which is: what is the product? Does it work for founders? Is it going to click? How do I get it into those founders' hands? He's using all of the tools, all the infrastructure, everything that's available in the world today, and moving up the pyramid to focus on that one essential, foundational element of company building.

Watching him work is just kind of a master class in how to do that. So I thought that was another interesting example of how company building is changing as a result of things like DeepSeek and other new and exciting models that are coming out seemingly every week.

Eli (35:16)
What's been cool is how much Sprint and Click, and all of those methodologies, have helped guide how he reined in these various aspects of company building, because at the very beginning he told us, well,

JZ (35:28)
Yeah.

Eli (35:31)
I want to work with you guys because I want to do it right the first time. Now, doing it right the first time does not mean getting it right the first time, but it means approaching the process in the correct way. And I feel like he's at this point now where the sled dogs are guiding him, and the sled dogs are the various AI models, but he's certainly steering the sled, and hopefully somewhere in this analogy there's the path that we laid out.

JZ (35:38)
Right. Yeah.

Yeah.

Eli (35:57)
This is one of the traps, actually, of utilizing AI as a builder: you can end up with too many paths opened up, and you say you're doing this thing, this thing, this thing, this thing.

And they all seem very alluring. It's probably a compounding error thing. If you don't have that focus of where we are actually going, you can end up in a really terrible spot with a bunch of cool-looking pieces, but nothing that's actually getting you to product-market fit.

JZ (36:25)
Yeah, that is super insightful. I mean, there is so much enthusiasm, certainly throughout the technology world, but also the business world and, you know, the world in general, there's so much enthusiasm about AI right now that I think there are a lot of false lights at the end of those paths. You spend a little bit of time down one path and there are going to be some people who are probably pretty fascinated by what you're doing.

But are they customers? Are they gonna pay? Are they gonna commit? Are they gonna change their workflow? That's harder to tell. That's harder to figure out, but they're probably gonna be kind of excited about it. And so you get these little glimmers of, I think there's a light down there. But what about over here? I'm gonna build this. I can pivot so fast, I can build new things so quickly, that without some type of framework for deciding which of those paths are the most promising, and then very quickly

Eli (37:04)
That's right.

JZ (37:17)
improving your vision to see if that is in fact a light at the end or if it's just an optical illusion, you can end up just kind of flailing, moving faster but not always in the right direction.

Eli (37:26)
Totally.

exactly

JZ (37:31)
All right, cool. A couple of things I just wanted to bring up with you really quick before we sign off. Grok.

Eli (37:37)
I never would bet against Elon Musk. Like, no one should bet against Musk. I think that's been proven over and over again, certainly as a founder. And I was looking at Chatbot Arena, and by the time you all see this, who knows where Grok will be, but it's number one right now. It's broken 1400 on the arena score, I believe for the first time. And so this is a really good result for X.

JZ (37:39)
Yeah. Yeah.

Eli (38:04)
And again, it shows that the pace of improvement, certainly on coding and hard problems, is only increasing. It's kind of remarkable. If you look at all of the top models right now, if I scroll through, they're all pretty darn recent releases, like the past couple of months. It's pretty insane.

JZ (38:21)
Yeah. And Grok is interesting

because, correct me if I'm wrong, but I think Grok is like the anti-DeepSeek from a capital use and resourcing standpoint, where they just threw money at it: the most GPUs, the most compute time, and just tried to climb the scaling wall. Is that about right?

Eli (38:45)
I actually don't know. I'm gonna be completely transparent, I'm not sure on this part of it. Yeah, I don't know.

JZ (38:46)
OK. OK. Yeah. Cool.

All right. Well, I'll look up some sort of post or something and link it so people can check our work here, check our facts. And then finally, literally when we signed on to record this, you told me about something super cool that I literally hadn't seen, which was Google released a product, or a suite of products, I'm not entirely sure,

called AI Co-Scientist. Can you describe that briefly?

Eli (39:16)
Yeah, and I also literally just saw this right before we signed on, so this is real time. My understanding is that it's a system that uses multiple AI agents in conjunction with a scientist to come up with and figure out research goals. And so if you recall our last AI podcast, when we were all talking about what's next, we talked about this notion of

multi-agent interaction. So we had chain of thought, where we let models think longer. And then the next step we saw is this notion of, well, what happens when you let models or agents work together? And I think that's what you're seeing here. There are some recursive elements: there's a generation agent, there's a reflection agent, there's an evolution agent, and a few more. And then you sort of rank the outputs, and it's like a self-improving loop of results, is how I understand it.
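[Editor's note: the loop Eli describes (generate, reflect, evolve, rank, repeat) can be sketched roughly as below. The agent names come from his description of AI Co-Scientist, but the scoring and refinement functions here are purely illustrative stand-ins, not Google's actual implementation, where a language model would play each role.]

```python
def generation_agent(goal):
    # Propose candidate hypotheses for a research goal.
    # (In the real system, a model generates these; here they are stubs.)
    return [f"{goal}: hypothesis {i}" for i in range(4)]

def reflection_agent(hypothesis):
    # Critique a hypothesis and return a plausibility score.
    # Stand-in scoring: a deterministic function of the text.
    return (len(hypothesis) % 7) / 7

def evolution_agent(hypothesis):
    # Refine a promising hypothesis into a stronger variant.
    return hypothesis + " (refined)"

def co_scientist_loop(goal, rounds=3):
    # The self-improving loop: rank candidates, keep the top half,
    # evolve them, and repeat.
    candidates = generation_agent(goal)
    for _ in range(rounds):
        ranked = sorted(candidates, key=reflection_agent, reverse=True)
        top = ranked[: len(ranked) // 2]
        candidates = top + [evolution_agent(h) for h in top]
    return sorted(candidates, key=reflection_agent, reverse=True)[0]

best = co_scientist_loop("protein folding")
print(best)
```

The key design idea is that ranking happens between rounds, so each generation of hypotheses is seeded only by the survivors of the last one.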

JZ (40:07)
Mm.

Eli (40:09)
It's interesting because I think this will help with ways to figure out those ground truths. Again, this isn't a training question, this is a results question, but it will be very interesting to see what the actual implications of this are. But as we think about accelerating scientific discovery, the sequence is math comes first, then science, then everything else, and this feels

JZ (40:33)
Yeah.

Eli (40:35)
quite weighty on the science part of things from what I saw.

JZ (40:37)
Yeah.

Yeah, cool. OK, so for one brief moment in time, we are caught up on the latest, unless something was announced in the last 45 minutes while we were recording. All right, should we call it here? Cool. All right. Yeah, thanks, it was fun to chat about this stuff with you. Thanks for coming back to help shed some light on what's going on in the AI world. Thanks for listening. Thanks for watching.

Eli (40:51)
I think so. It was great.

JZ (41:03)
Head on over to jakeandjz.com if you want to get the weekly email newsletter with the latest episode. And if you have a question for us, send it to hey at jakeandjz.com and we can answer it on a future episode. Thanks everybody.