Weekly podcast about startups, design, marketing, technology… and anything else we’re thinking about. 🤓
Hosted by Jake Knapp and John Zeratsky, co-founders of Character Capital and bestselling authors of Sprint and Make Time.
JZ (00:00)
It is so cold here right now in Milwaukee. It's up to a balmy, what, 10 degrees? 15 degrees? Yeah. But it's been really cold. Eli, did you go for a run today?
Eli (00:12)
I did, I cursed
myself as I got out there. I have on the full ski mask, I look just like a psycho, but it always feels good to get in just like a couple of reps before the day, especially if I'm chatting with you guys. I feel like I have to, like, get out the energy. Now I'm very calm.
Jake (00:26)
What's your
warmup regimen? Like, that's pretty cool. You're not gonna pull a muscle or anything at 10 degrees?
Eli (00:34)
Yeah,
I don't know. You know, I feel like as soon as I hit 40, then I'll worry about it. But for now, I just, you know, do the stretches.
Jake (00:41)
Okay, that's, that's the... I see. I see. I just revealed something about myself there. Yeah. Okay.
JZ (00:49)
Once you're over 40, it's like, spend 30 minutes warming up and stretching, five minutes working out, 30 minutes cooling down and stretching again. It's like...
Jake (01:01)
It's like the taxi, you know, you get on the plane and it's like, okay, yeah, it leaves at 6:08 PM, but we're taxiing for 45 minutes. Yeah.
JZ (01:03)
Yeah, exactly. Yeah. You're like, yeah, here we
go. Like, let's work out, but first let's stretch for a good 20, 30 minutes.
Jake (01:15)
Well, you guys are all here, all three of us. Should we record a podcast?
JZ (01:18)
Yeah, we got Eli to come back again. Let's do it.
Jake (01:21)
Let's do it.
JZ (01:44)
Welcome to episode 15 of Jake and JZ, the weekly podcast about startups, design, technology, marketing, and other stuff that we are thinking about and working on. That's Jake over there. I'm JZ. That's short for John Zeratsky. We are co-founders of Character Capital, authors of Sprint, Make Time, and our new upcoming book, Click. And this is our podcast. If you want to get a weekly newsletter that has links to the latest episode and some other cool stuff we've been looking at, go to jakenjayze.com.
Today we have a very special episode. We've got our co-founder and partner Eli Blee-Goldman back. Eli, thanks for coming back. The last time we recorded with you, it was super fun and it was one of our most popular episodes. So we're just gonna keep having you back to juice the numbers, make sure that we don't fall from relevance. So thank you for returning.
Eli (02:39)
Thanks for having me, guys. I'm so excited about today, truly.
JZ (02:42)
We're going to talk about AI, which might seem like a little bit of a left-field topic, but it isn't, because at Character Capital, our VC fund, pretty much every company that we invest in is doing something with AI. There's sort of a spectrum, from people who left Google DeepMind, who are doing cutting-edge research on applications of AI, all the way to people who are building regular
software, web apps and that sort of thing, with a little bit of AI kind of added in. But AI is kind of hard to avoid. It's this really important new wave of technology that a lot of people are excited about, including us. And Eli, you follow this stuff super, super closely. I'm always amazed by the papers you're finding and the people you're finding who are working on really interesting AI research. And right before Christmas break,
you had seen the announcement of o3, OpenAI's newest model. They previewed it and provided some cool demos, and you texted us and you were like, hey guys, while you're on break, think about, you know, how this might change the way that people build companies and the ways that we might invest. Then, like a week later, you texted us again, and you were like, guys, there's all this smoke about big leaps of progress toward AGI,
you know, artificial general intelligence. And you said, again, we should really talk about this. We should think about whether this fundamentally changes what we do and how we think about building software companies. And so I'm curious to ask you, given how closely you follow this stuff and how much we talk about it, what felt different to you about those announcements? Like, why did those clear this threshold of,
I have to make a special point of texting these guys to put it on their radar?
Eli (04:31)
Well, I feel like latent thought and the ability to just think about things without it being time-bound is so important for how we work at Character Capital and how we process information. And funnily enough, that was a very similar breakthrough that o1, o1 Pro and o3 had that really got me thinking. And one of those key breakthroughs was this notion of
scaling test-time compute. So essentially, once you make the request and you have the model do something, having it think for a bit longer yields these really, really amazing gains. And the news stories...
JZ (05:08)
So slowing it
down actually makes it better, right? It's like, you know, I think people always assume they want instant response, instant gratification, but there are benefits, as with humans, with software, in some cases, slowing things down a little bit can make them a lot better.
Eli (05:24)
Yeah, and or just giving them more time. So literally letting the model cook, letting it stew, letting it work, that really yielded amazing benefits. And so many of the news stories over the past few months that you may have seen in the Wall Street Journal or other articles had been focused on this notion of a scaling wall. So basically, is there enough data? Is there enough compute to train these gigantic models during pre-training and
JZ (05:29)
Yeah.
Yeah.
Eli (05:51)
get the advantages that we had seen in the early days? And those models weren't, they weren't proving out. So the Wall Street Journal has this very large article that comes out, and it says the Orion model by OpenAI tried to have more data, spent a ton of time scaling compute, and they're not seeing really good benefits. And part of this is because, you know, it's sort of like a power law. You get a lot of benefits in the early days, but you wouldn't expect those to continue forever.
JZ (06:20)
So just throwing ever more data at model pre-training isn't producing the same leaps that it was before.
Eli (06:26)
Yeah, exactly,
exactly right. And yet, when I saw the o3 announcement, I was like, holy crap. They do this sort of benchmarking, they test the models on a variety of different topics. And one of the things that blew my mind was they have this thing called the graduate-level Google-proof Q&A test. And so it's a multiple choice test, like an exam for physics
JZ (06:47)
Yeah.
Eli (06:53)
PhDs, like super, super hard questions that, you know, the normal person could never ever get. I think it was GPT-4, when it came out in March of 2023, it got like 35.7% of these right. So it was super low. And in the o3 announcement, they said that they had like 88% correctness. And just for comparison, if you're a domain-specific PhD,
you would get maybe between like 65 and 70% on this. And so it wasn't just that, it was the notion that even if no additional gains came from any model that was being developed right now, like let's just say we bow-tied it, you know, o1 Pro, Claude, we're done, I was just shocked at how many things we can do in the world today that we haven't been able to do in the past. That just blew my mind, like...
JZ (07:26)
Amazing.
We're good.
Eli (07:50)
It's wild.
JZ (07:51)
Yeah, yeah, interesting. Well, I know that we could talk a lot about what those new things are, and we will, we'll get to that. Like, you know, in an o3 future and an AGI future, in a world where you can envision a piece of software that you can ask for anything, that you can instruct to do anything for you, a lot of things could change in the world, a lot of things will change. But I thought it might actually be kind of interesting to start with:
what do you guys think stays the same as AI continues to advance?
Jake (08:23)
Well, I can give you kind of a simplistic way of thinking about it, which is how I think about it, for better or for worse. And it's that, you know, if you're working on technology, there are, simplistically speaking, two modes you could be in. You can be in a mode where you're using the tools of today to build things that just kind of follow from what's available today. And so if I write a document in Microsoft Word or Google Docs, if I, you know, post something on LinkedIn, if I create a new website using Canva or Squarespace or, you know, Webflow, or if I write the code to make a website, or whatever, I'm using the tools of today to make a thing that just sort of follows naturally from what exists today. You can create businesses doing that. You can, you know, iterate features, sort of keep up with the competition.

And then there's another mode you can be in where you're really straining at the edge of what's possible. You see something that you want to do, you see a problem you want to solve, and it just doesn't quite work yet, and you've got to find a way to get there. Now, I'm kind of describing the same thing, because you can only use what's available to you today to do that, to push against the boundaries, but it's a different mode, right? It's a different mode. And that's the mode that a startup is in. They're trying to create something that doesn't exist today. Well, the kind that we're interested in, they're up against the edge of what technology allows. And so they're saying, hey, we believe technology will allow us to do this thing, that data will allow us to do this thing, but we have to make it happen. It's not something we can just sit down and punch out.

And in that world, if you're going to have success, you need to have people who have extraordinary capability. They're great coders, or they're great product leaders, they're great marketers, they're great at seeing what's possible. So they need great insight, and they need to have this really powerful motivation to do this thing that's really hard. And, you know, they need to be going after an interesting opportunity. And then the magic works when they are right: that opportunity they're going after is an interesting one, and they're right that it's possible for their team to push against the boundaries of what's possible, make the technology work, and deliver it.

And I just think, fundamentally, that doesn't seem to have changed across all of the waves that we've seen, from you can make software yourself, to you can make websites yourself, to you can make software on the web, to you can make software on these platforms like your phone, to, I guess, what's now the early stages of AI-led products. And there will be new stages that will disrupt everything about the way we see the world and the way people build products, but I think it'll be a while before that changes. The question is, does AGI change that? Does that make it so that anyone can change the boundaries, reshape the world, godlike, create a product that was unthinkable before? It's hard to know how we would operate in that world where anyone could do anything all the time. So I'm inclined to stick with the model of: we need people who can do things the old way, they'll just do it with the new tools.
JZ (12:03)
Yeah. So it's like you're saying, if I'm understanding correctly, there's sort of always this gap between what is easy to do with the tools today and what is harder to do. And founders, people who are building companies and products, they're always pushing on that other side of the gap. They're always trying to build something that's possible, but it's not easy. It's not the sort of default thing that the tools spit out. And...
Jake (12:32)
Yeah.
JZ (12:36)
One of the things that's happened is the number of people who can build something has gotten greater, right? So like, when we started building websites, there weren't very many people who were building websites. It was simpler, ironically, it was a lot simpler to build a website, but there weren't a lot of people doing it. And now there's a lot more people doing it, because you have Squarespace, you have whatever tools you're using for it. So does that gap eventually collapse?
Or does it always stay a gap? I guess you're sort of arguing that there's always a gap there.
Jake (13:07)
Yeah, wherever the default is, whatever the tools deliver automatically, there are always going to be people who want to push into the gap. And so, you know, the question is just: what does that look like?
JZ (13:12)
Yeah.
Yeah,
Eli, what do you think? We have these ultra-capable models, we have these amazing tools, we maybe, potentially, have AGI at some point, however you define that. Like, what do you think stays the same in the world? What will always be true?
Eli (13:30)
Yeah, okay, so I think people are always going to want to build and share stuff, whatever that is, and you want to do that with people that are aligned with you, or agents that are aligned with your vision, and you want the feeling of agency. There are all sorts of things that people do better than you in the world right now, but that doesn't preclude you from doing those things and enjoying them.
There's someone who's a much better cold-weather runner than me, but I still run because it's important to me. And then finally, I think you still need taste, and you need to have some sort of aesthetic judgment. And particularly, I was thinking about this notion of agency, the feeling of doing great work, and then taste, with you, Jake. And maybe some Sprint superfans know this, but you actually have a shared
interest with me that goes way, way back to your early days, in the Old Masters. And I was wondering if you could just tell us a bit about that, because this is a part that some people may know about, but they may not know your background.
Jake (14:37)
Painting, yeah. I studied painting and art in college. Well, I was sort of a computer nerd before and during college, but I went through this misguided period of thinking that maybe I was actually gonna be an artist. And I kind of came to the conclusion that no, what really floats my boat is computers and building stuff with technology. But I studied
in college in Rome for a while, and in this sequence of classes that we were doing, we would go and visit sculptures by Michelangelo and Bernini and paintings by Caravaggio. And that stuff really made a big impression on me, and it made a big impression on me in some different ways. And one of the ways was this
bar for creating art. If you think of yourself as a potential artist and you look at the paintings of Caravaggio, who, I think maybe many people know him, but many people probably don't, he's not as famous as Michelangelo and, you know, Leonardo da Vinci and many other folks, but do yourself a favor and look up some Caravaggio paintings online. And also in person, of course, as with any painting,
it's just a totally different experience, they're that much more impressive. But Caravaggio comes after, sort of, you know, the later Renaissance. He's building on the work of a lot of these more famous folks, but his paintings just have such emotion in them. And they're so dark, too. The guy was like a murderer. He was really kind of an interesting person himself, with this kind of violent and chaotic life. And his paintings are super dark and super
human, and the emotion is just really powerful in them. You look at that as, you know, 20-year-old Jake or whatever, and you're just like, huh, maybe I should find a new vocation. Maybe that thing that I really liked doing with computers makes a bit more sense. You know, that was kind of a beam of light that shone through the church window where the Caravaggio was and hit me: this is not my calling. This guy kind of already nailed it.
Eli (16:48)
So I love
this story, though, because with both of your design backgrounds, there's this sense of: what does it mean to have an aesthetic taste, to have a customer-focused taste? How do we think about building products that resonate with people but also have inherent worth and sort of value? I think that stays the same, and is
even more important now. And it really speaks to the work that we do at Character with our early-stage teams, where we're working hand in hand, you know, with that sense of agency, with that sense of trust, and helping them to think through, like, what is the best way to execute your vision? And Jake, you also had this really audacious goal when we did our quarterly, you know, in San Francisco, where the three of us get together and we talk about the future. One of the things you said was, if people used Sprint,
and they were able to effectively use it and feel really great about it, even if nothing big ever came from it, that would be a win. And I feel like that stays the same. And again, that's another thing that's really important: doing work that's meaningful, with people that you like, in a focused way. I think it stays the same. I don't know. What do you think, JZ?
JZ (18:04)
I think a lot of people feel that life has become less meaningful in ways over the years. So doing things that feel like they matter, creating things, you know, even if they don't become famous, even if they don't become popular, even if they don't reach some concrete goal, doing things, making things,
feels good. It's important to people. But I sort of wonder, when you think about using AI tools to make things, maybe the way the future plays out is that you get that feeling of making by asking an AI tool to go make something for you, right? But there's also the risk that the developers of those tools
remove too much of the meaning, that they make it too easy. And so what you really want is to, like, make a cool app that you can use to meet up with your friends. But if all you have to do is mention that to an app on your phone, and then it makes the app instantly, maybe that feels less good, less rewarding, less important than if it actually took you a bit of time to sort of struggle and go through it and figure it out and try it and iterate it.
Eli (19:06)
Interesting.
JZ (19:12)
So I think that that is one of the, you know, truisms of the human experience. Yeah. But, in full transparency, I was imagining some much more prosaic things that would stay the same. Like, I'll tell you what I wrote down: people will always need help learning about the best tools to use.
Eli (19:18)
Suffering brings meaning.
JZ (19:39)
In the world, there are a lot of ways you could solve a problem. You always need help. You always need marketers, sales. You always need people who are going to be like, hey, this is the best way to solve that. People always need help using those things. They need some sort of support. They need tools that are intuitive. They need instructions on how to use technology. Most of the time, people don't want to switch to new tools. Like, most of the time things are working okay. People are
maybe not satisfied with the products that they use, but they're like, okay. People will always hire other people, or hire companies, or hire products, to do things for them, who are better at using those tools. So those were some of the things I jotted down as being very concrete elements of this world
of product design and development that I think are going to stay mostly the same in the coming years.
Jake (20:33)
Well, I would add to that: people are always going to need to feel agency in their lives. And that's a huge deal. There's a study out there, and I don't know, I can't recall it, I can't cite it, but I think this will just sound true, so I believe it, and so it doesn't really matter who did it or what the sample size is. The study found that folks who are using AI tools, knowledge workers using AI tools, they get more done
JZ (20:36)
Yeah.
Yeah.
Jake (21:01)
than their peers who don't use AI tools, but they have lower job satisfaction. And that's a huge issue. And it's an issue with all kinds of technologies. They enable us to get more done or to do things faster, but if they're taking away some agency or taking away some of our humanness, they're gonna lower our satisfaction. And just as standing in...
JZ (21:07)
Interesting.
Jake (21:27)
the church in Rome and looking at the Caravaggio painting, I realized that me pouring effort into this doesn't yield, I don't think it's gonna yield, the right kind of satisfaction, because if what I want is to be Caravaggio, it's not gonna happen. But I do get that from pushing pixels and from writing code and from watching people use a product that I created. And I felt that that's what I'm gonna go after. I was going after agency and...
JZ (21:44)
Mm-hmm.
Mm.
Jake (21:52)
So many things, I think, that have become really successful, big products, have taken agency away from us. I think Instagram takes agency away from people, or it's a very passive experience, using Instagram or TikTok. You might create something on there, but a lot of that experience is extremely passive. So the extent to which tools can create agency, help you take care of the stuff that, you know, is maybe lower agency for you,
but actually open a path where you're doing things where, yeah, there is suffering or difficulty or struggle, that's really important. And it's not easy to say what the form of that will look like, but I think that's at the heart of what will make these tools successful for us, rather than tools that own us.
JZ (22:40)
This makes me think of the work that we did with Inductive Bio, which is one of our portfolio companies. They have some AI that will predict which drug molecules are most likely to work well in the human body. So before you go to all the trouble of running clinical trials, you can sort of do a prediction, kind of a preview of what's going to happen. And I remember when we ran a design sprint with them, this was a couple of years ago at this point, their product was quite early. They had basically built the backend, the sort of
brain of the system, but they hadn't really built the interface. One of the problems they kept running into was that their customers, who are chemists whose job is, and has been for decades, designing these drug molecules, they felt, I think, to use your word, Jake, like they felt a loss of agency, because there's a piece of software that's going to do part of their job. And one of the things that was a big breakthrough for them was not to just have, you know,
you could imagine a chatbot version of this tool that's just sort of like, tell me what disease you want to cure, and it just gives you the molecule. I mean, to be clear, that's not what Inductive Bio does, but that's sort of the extreme, you know, futuristic version of this. What worked for that company was a tool that was very familiar to their customers. It was the type of interface they were already using, but then it supercharged it and superpowered it, so that those customers were still in charge of
the work that they were doing, but they were being more effective. They were having more success because of the AI that they were using. And I hadn't thought about it through this lens of agency before, but it's kind of an interesting way to look at it, I think.
Eli (24:23)
What do you guys think changes? I think that was the follow-up, JZ, that we had chatted about.
JZ (24:30)
Yeah,
so the fun question. Yeah, Eli, when you think about the latest round of developments, and you get excited enough to text us during the holidays and say, hey, let's spend some time thinking about this, what are the possibilities that you dream about?
Eli (24:47)
OK, so I have three categories I'm super interested in. They are not super connected. The first is just generally, from an early-stage company perspective, I think you can't be boring. The thing that will change is the notion of just doing another SaaS company. We sort of joke about this all the time, SaaS is dead. But it really is dead.
JZ (24:51)
Yeah. Okay.
Eli (25:11)
This is not the world that we want to live in, like peeling off some small part of a revenue stream from all the Fortune 500s to automate some tiny piece of their tasks. That is just not as important as it was 10 years ago.
JZ (25:26)
Yeah,
or even saying like, this is like that product, but it's a little bit better in this way.
Eli (25:30)
Yeah, right.
I think you're going to lose out to competition. You're going to lose from just being one of many. And I don't think that the numbers will work at the end of the day for those.
JZ (25:39)
And is that because...
Jake (25:40)
Can you identify a few companies from the past decade who were this kind of company, who had some kind of success, where you think that kind of company won't exist anymore?
Eli (25:51)
Well, I think about task automation in a broad sense, not in an AI sense, but like some sort of business process outsourcing slash task automation for something in, like, healthcare. There was a bunch of companies that worked on task automation for revenue cycle, and they did 10 to 30 million in ARR based on some sort of uplift that was very incremental, you know,
like 6 to 7% better revenue, faster time. I think those companies are most at risk. And I also think that I wouldn't want to be a later-stage investor right now. Another thing that is very connected with that, that also changes, is that early stage has become way more important than it was 10 years ago, because you just don't need as many people as you used to. And so I think that's why getting it right in the early days,
doing a foundation sprint right off the bat, hopefully working with us at Character Capital, these are of ever greater importance, because once you're sort of down the river and really going at it, the options are just much broader today for what your company looks like than they were in the past. So I feel like naming names is a little bit rude, but that's my general take, Jake.
Jake (26:50)
Good plug.
JZ (27:11)
Is it specifically about the most recent waves of advancement in AI that caused that change? I could argue that that has always been true. You never wanted to just build a product that was incrementally better than the other one. You always wanted to do something that was radically different. And when you look at a lot of the huge successful companies of the last
Eli (27:21)
Yeah.
JZ (27:33)
15, 20 years, a lot of the types of companies that we're all familiar with, some of which we've had the opportunity to work with, they all did something that was radically better. So what is it about this moment in time that you think amplifies that phenomenon?
Eli (27:49)
I think it's the sense that so much of the work that we do with software can actually be automated, or at least done by a layman. I think that that's really important. And I think what we're going to see this year and next year, at later-stage companies, but certainly at the outset for these early-stage companies, is AI working better together, more seamlessly together. So it's not just you prompting a random model
to do something and then you going and putting it in Notion, or going back and adding it to your CRM in HubSpot, or whatever. It's going to be much more integrated and connected. And I think the big unlock for future companies, and hopefully the companies that we're working with, is that same power that we've seen in expanding inference and letting these models think. Well, you could also say, what if we make models work together, and they're sort of talking to each other and spending time thinking?
JZ (28:19)
Hmm.
Eli (28:44)
What I think you'll see in those cases, and for companies that utilize that approach in their software and their service delivery, is not just incremental gains, like, oh, this task is done faster; you may see truly exponentially better outcomes from a company perspective. So I think it's that sort of unknown inflection point, where the automation is not just automation for automation's sake, or to sell automation, but it's
inherent in what you can build and deliver to your customers, in a way that's just way easier and way more powerful than in the past 10 or 15 years.
JZ (29:18)
So it sounds like it's really about the capabilities advancing at a far greater rate than they have advanced in the past. When you think about the capabilities of a mobile app compared to a web app, they had some advantages. There were some new things you could do. You could geolocate. You could access a camera really easily. You had the convenience factor of it being in somebody's hand, in their pocket.
Eli (29:25)
Yeah.
JZ (29:42)
If that was a step of one on the capability scale, it sounds like what you're saying is that we're much more likely to see a one-to-ten capabilities leap, and that just opens up far greater potential for the kinds of problems that these companies can solve and the types of businesses that they can create.
Eli (30:02)
Absolutely. Yep.
JZ (30:04)
The other thing I've been thinking about a lot is: what do new AI tools mean for prototyping, essentially? Because when we run design sprints, we build prototypes in one day. We do a couple of days of work to make sure we're prototyping the most useful thing, the right thing to test with customers, but then we build a prototype in one day. And we've already seen that
people have been able to use AI in the last year or so to make those prototypes even more functional, even more detailed, hopefully better able to answer the key questions that they have. But, you know, I can't help but wonder, when the thing you can create in one day is not just a prototype, quote unquote, but is actually closer to a finished product, how does that change the way that people design products?
Jake (30:56)
Well, I mean, I think one thing that changes is they should be running design sprints, you know, all the time to form the product. If you can create something that's very close to a functional product, or you've got the backend system and you can create a completely new front end in a day that works, which seems totally plausible in the extremely near term, probably possible by the time people are listening to this podcast. So...
JZ (31:02)
Yeah. Yeah.
Yeah.
Jake (31:23)
Like, yeah, that's the best way then that we know of to get to a good solution, or a good set of solutions to test. I mean, imagine if you're doing a battle royale and you're creating competing prototypes, and they are all fully functional, and they each represent a different school of thought about solving this particular problem. I think it's super exciting. It's always felt to me as though the design sprint part was this wonderful moment. You know, you do the foundation sprint, and we're getting, like,
here's what our hypothesis is. Then we're doing the design sprint: we've put that hypothesis into a form, and we get to see with the customer, does this actually solve their problem? Is it working? We know exactly what we're looking for, because we know what we're trying to accomplish. We know what we believe might be the risks, and we're identifying, customer by customer, how did we do? Did our hypothesis work? Are the risks really a problem, or did we address them? And we mess up. We fail in places. And then we can come right back
next week and do it again. If you were building products and actually deploying them at that kind of pace and constantly in contact with your customers, think about how much better it would be. I mean, I just think about how much of product development today is done where the customer is this abstraction. And if you want to know what the customer really thinks or really does with your product, at best, you're arguing about numbers in a...
JZ (32:28)
Yeah.
Yeah.
Jake (32:44)
table somewhere, you know, you're arguing over stats and, like, you know, graphs. You're not watching humans. You're not seeing how they actually do their work. And so this abstraction thing leads to products that people are generally not very happy with. I just read Lenny's Newsletter, which had a sort of top products of the year, you know, what's in everybody's stack for product managers and for the other folks, right? There's a lot of designers and engineers who subscribe to that newsletter, and
JZ (33:03)
Yeah.
Jake (33:13)
going through that stack, there's the most popular products, and then here's the most hated products, and there's a ton of overlap. Like, often they're the exact same product. You know, people use Slack all the time. People who are on teams want to use Slack. People who are using Slack hate Slack. It's just, well, the problem is we're not really giving people the right kind of agency. We're not really solving their problems properly. And if we can make faster loops,
maybe we can. Maybe we can get it right again and again.
JZ (33:44)
Yeah. Yeah. One of the things that's
frustrated me about design sprints is that, you know, you do some number of design sprints and you hopefully get to that click. You get to that moment where you're consistently putting prototypes in front of customers and they're saying, wow, this is amazing, when can I get this? But then there's always this translation period when you kind of have to go dark, you have to go heads down. It's like, awesome, we know what we're doing, our
customers are trying to rip it out of our hands, but now we've got to go spend three months making it real. And that is a period when you aren't talking to customers every week, right? And you might drift a little bit. You might start to make some other decisions, some other assumptions, and you lose contact with your customers. And so I could almost imagine that in a future where you can build an 80%, 90% functional, complete product
in a day, or in a couple of days, you never have to leave that mode. You never have to leave that mode of getting organized, building a prototype, testing with customers, learning from that test. You never have to step out of that and go into heads-down isolation mode where you're just building without customer contact. That's pretty exciting to think about.
Jake (34:57)
Yeah,
yeah, it's super exciting. We're gonna have to work hard to make sure everybody, as they change to the new world, that they change to our version of the new world. But you know what? We'll just start with companies in our portfolio. They'll do it. They'll dominate everyone else, because it's gonna be such a better way to work. So I guess people will just have to copy us.
JZ (35:07)
Yeah.
JZ (35:16)
Eli, what about competition? In the future, are there going to be fewer companies that matter, because you can use any one of those models to build any other product you want? Or is the opposite going to happen, where we're just going to see an incredible explosion of competition because it's so much easier to build new products?
Eli (35:38)
Well, so I think there will be a certain number of these frontier models, a limited number of frontier models, that have the lion's share of the market. However, I also think it's incredibly important to have open-source alternatives, and to also have alternatives that are more closed off and, again, directly aligned to your agency. And so when I think about the future of work in five to ten years, it may be sufficient to have an OpenAI model,
you know, working on your behalf to help you with your graduate-level physics problem. That could be acceptable, but you may, yeah, yeah, exactly. But you may want a slightly different flavor of that, for someone, or an agent, that is closer to you, that feels fully aligned to your goals and your outcomes. And I would say in those cases, we may see sort of quasi-open-source, quasi-closed-source models working in
JZ (36:14)
Small market size, but...
Eli (36:36)
concert with each other to better align to individual goals. And I'm not sure that that neatly fits into a paradigm of, you know, three, four, five super big companies, and then just sort of a smattering of smaller open source projects. Like I don't think it will exactly be like Linux and Windows and Mac.
JZ (36:53)
That's
kind of interesting. Cause like Google dominates search in large part because the answers to most questions are like largely universal. if you're looking up something there's some, know, it's Google is personalized, right? Like you, you guys get different search results than I do, but like not to an extreme degree. And so, yeah, when you think about true AI agency and
Like you want your agent to not just be the same one that everybody else is using producing the same kinds of output. You really want it to be your own super personalized, super custom thing that really represents your taste, your preferences, your experiences. And that's a really different way of thinking about competition and the market that I hadn't considered before.
Eli (37:44)
I think one of the other sort of out-there things that I want to make sure to hit on is AI ethics, in this sense, because I think it's incredibly important. As people can probably guess from our name being Character, we take these sorts of matters incredibly seriously. And I think there's a really uncomfortable gap right now in how people are thinking about this topic. On the one hand, you have the e/acc crowd.
JZ (37:50)
Yeah. Yeah.
Eli (38:14)
These are sort of the techno-optimists that are saying, let's accelerate, let's do more with software for humanity, which is of course super great. But then on the other hand, you have sort of the AI doomers and the decelerationist crowd, who are saying either technology is not the answer, and/or AI is going to kill everyone, so we really need to dramatically slow things down. The tension that I see with both of those, and in particular with this sort of
human-agent joint agency, is that neither thinks through the morals or the ethics of AI models that appear sentient. And so this is sort of a quasi-philosophical question that people have considered throughout the ages. There's this notion of philosophical zombies, p-zombies, which are essentially people that are in every way people,
but they don't have internal dialogue, so there's no sort of internal reality. And then there's this other very famous thought experiment called the Chinese Room argument, where you give someone Chinese, they're sitting in a closed-off room, they have a rule book to translate back, and so it looks like they're talking Chinese back to you, but they're really not, they don't know Chinese. And both of these hint at this notion of human consciousness exceptionalism, where there's an internal reality, and therefore you
are sort of granted sentience and you're granted agency. And the tension I see is that right now, with a lot of these models, if you only interacted with them via text, most people would never be able to tell this is a model. I mean, you might pick up on small little things. But I actually did an experiment about this in the Wall Street Journal comments section, because I was so frustrated with the reporting on the Orion model that we talked about at the beginning, and
the comments from people were just insane. So I hopped in and I fed it into Claude and I had Claude respond back. And the vast majority of people did not know it was an AI, and it brought out really good pieces of them. I say all of this to say that I think in the future, there should be some sort of AI human rights bill, more or less. It's very much out there, but I think that we're getting to the point where these models are so advanced
that it's not exactly right to think of them as just tools. Tools will always exist. But if you have someone that's your therapist, that's your coach, that's your science partner, that's helping your kids learn, and that appears human, why wouldn't you want to give it some rights? So this is my sort of out-there take on AI ethics that I think is going to be very important in the coming years.
Jake (40:53)
I think we should just end it there. There's no way I can top that. Nowhere we can go from there. Drop the mic.
JZ (40:59)
Yeah, very profound. Thank you, Eli. Thanks everybody for listening. This has been episode 15 of Jake and Jay-Z with special guest Eli Blee-Goldman. Eli, thanks for coming to chat about AI stuff. We'll have you back another time. Thanks for listening everybody. Bye-bye.