Hallway Chat

This week is a discussion of the points of differentiation vs. commoditization in various AI models, including how these points might change over time and how they might differ between language, image, and video models. Then Fraser pivots to orienting around jobs to be done in AI, and how the various models have a huge gap between being capable of doing something and doing it well.

We then talk about Claude 3, the nature of benchmarking, and rapidly falling LLM prices. Lastly, we cover a startup subject, debating the merits of SAFEs (Simple Agreement for Future Equity) vs. priced equity rounds in early-stage funding.

Links:
  1. Claude 3 from Anthropic
  2. LMSys Chatbot Leaderboard for which models are sticking out right now
  3. For areas where models don't differentiate, we are back to the 7 Powers framework. Hamilton Helmer's website has an overview: https://7powers.com/. Sachin Rekhi has a good detailed primer on the 7 Powers as well.
  4. Carta guide on priced rounds vs. SAFEs; we don't agree with all of its pros and cons, but it's a decent overview
  • (00:00) - Opening
  • (00:31) - Are models an inevitable commodity?
  • (05:58) - How image, video, and other models may pan out differently than language
  • (08:20) - It's not the model, it's the customer
  • (10:20) - Jobs to be done in AI - 1. Can it do it... 2. Very well... 3. Exactly how I want it to.
  • (22:13) - Claude 3 by Anthropic
  • (25:25) - Elo Leaderboard Results
  • (26:25) - Claude 3 Haiku and Falling LLM Prices
  • (30:33) - SAFEs vs. priced equity rounds

What is Hallway Chat?

Fraser & Nabeel explore what it means to build great products in this new world of AI.

Two former founders, now VCs, have an off-the-cuff conversation with friends about the new AI products that are worth trying, emerging patterns, and how founders are navigating a world that’s changing every week.

Fraser is the former Head of Product at OpenAI, where he managed the teams that shipped ChatGPT and DALL-E, and is now an investor at Spark Capital. Nabeel is a former founder and CEO, now an investor at Spark, and has served on the boards of Discord, Postmates, Cruise, Descript, and Adept.

It's like your weekly dinner party on what's happening in artificial intelligence.

Nabeel:

They're closer to Pinterest than they are to, you know, sales, SDR, automated phone calls.

Fraser:

Oh, okay. Well, listen. I deserve that.

Nabeel:

It feels to me the way that GPUs and CPUs felt early on.

Fraser:

And then don't you think the same thing's gonna happen on imagery? And I guess if not, why?

Nabeel:

Hello, everybody. Welcome to hallway chat. I'm Nabeel.

Fraser:

And I'm Fraser.

Nabeel:

We're gonna talk about some apps this week. I think Fraser's got a VC question that came up as well. We're gonna cover all of that. This is gonna be a real hallway chat, but I wanna start with a subject you brought up earlier that brings up something for me, which is: what are the points of competition and differentiation that we will see over time in this market? But, of course, I recognize that this is likely to be slightly meandering. You know, it's like a chat in a hallway where we're trying to work this out in real time.

Fraser:

I'm gonna stick with you on it. Let's go.

Nabeel:

Let me just walk through what's literally been our conversations lately. The point of entry to the topic is that it's very hard to hold in your head at the same time that we are in the early innings of innovation with these new models, and I'm not just talking about language models. You know, Sora, a video model, comes out, or the next versions of Midjourney that just came out recently. So you see it in image models, audio models. In all of these things, it's very hard to hold in your head that it is at the forefront of innovation, that we don't fully know what these things are gonna do next, but that also the models themselves are relatively commodity and are not the point of differentiation. And, by the way, a rapidly depreciating asset class.

Nabeel:

Right? You spend hundreds of millions of dollars on a language model only for it to basically be commodity a year and a half later. Like, when does that ever happen in technology? Right? Was GPT 3 a complete waste of money at this point, as a kind of, like, canonical example?

Nabeel:

Right? For Midjourney, I don't know how much money they spent on version 3 or version 4 of Midjourney, but no one's using version 3 or version 4 of Midjourney. And

Fraser:

Right.

Nabeel:

It's just money down the toilet. Right? Now at least in Midjourney's case, they probably were profitable on that model alone, so it's fine. But it's a rapidly depreciating asset. These things are getting open sourced over time.

Nabeel:

Everyone's trying to claim GPT 4 performance, which, as we'll talk about, they may or may not have gotten to. But at the very least, we all know that everybody's hitting GPT 3 performance. So what

Fraser:

we're trying to

Nabeel:

say here is: your edge is a 9 month edge for a couple hundred million dollars. That seems like a hard way to row. And you have a bunch of AI researchers that whisper in your ear, like, oh, just a handful of people can do the thing as well. It's not that hard. It's just money.

Nabeel:

What do you think about that particular claim? Let's start there. And then we can kind of wander up the stack from the model up to

Fraser:

the application layer. I think that this is one of the hardest and most enjoyable parts of this moment in time. Right? It is impossible to predict with confidence how things are gonna go. I still think language models are going to follow a trajectory where we will push on the frontier from a small number of groups who can provide differentiated value that's observable and valued by end users.

Fraser:

I think that a lot of the margin is going to sit right on that frontier. The really interesting thing is that that frontier might have a half-life of 6 months. And then what I think we're really seeing quite quickly is that one step below the frontier, there's a larger handful of players who are able to get there, and then 2 steps below the frontier is basically commodity. I think it continues to go in that trajectory. However, the Microsoft earnings are staggering.

Fraser:

It would suggest that that's a viable investment for some number of cycles still. Like, the inference that's coming out of Azure AI is astonishing, and the growth is astonishing. That's gotta be coming from the frontier model.

Nabeel:

In language models, I think it might play out very differently than video and images and robotics and coding. In language models, it feels like you're aiming at AGI. There's an ever deeper set of capabilities you're gonna go for. And so there, I think the analogy that holds, that feels valuable or viable, is the chip strategy.

Nabeel:

It feels to me the way that GPUs and CPUs felt early on. That's probably the best example that I can think of, where every year when I was growing up, like, Intel would release a new thing. There'd be a Pentium, and every year there'd be a new chip and a huge amount of time and effort and energy and money that goes into the chip. And then 2 years later, there's a new chip, and that chip is slowly going towards the thing you throw into, you know, your smoke detector. And, like, it just turned out that the cash war meant that you just couldn't have

Nabeel:

players in the market. And, usually, you end up with Intel, AMD, NVIDIA. Like, you just end up with this small

Fraser:

That's right. Number of players

Nabeel:

that can just keep up with the cash and technical war, because the frontier was an infinite run. And that is still going on, by the way. And so that makes sense to me in language models. And so the kind of underwriting capacity there, strangely enough, feels like: he who can raise the most money and then break technical barriers to scale wins.

Nabeel:

Right? Or at least a duopoly or 3 or 4. Like, it's a small number. The contentious part there is whether nation state actors will change some of that dynamic.

Nabeel:

You kinda shook your head. What do you think will happen on that side? And then we'll get out of language models, because I think it's much more interesting when you get to the other modalities.

Fraser:

I think the other modalities are much more interesting. I think we've seen evidence so far that, like, in the GPT 3 era, there were groups who tried to just throw money at it, and they created, like, true junk. Right? And I think that there's some level of secret sauce or magic that gets applied to deliver useful models at that scale. And I think that that's just gonna continue.

Fraser:

We've seen enough data so far to suggest that you're not just gonna have the UAE or some other nation, say, come in and just simply spend money on compute to solve it. I think that there are real product-type problems that have to be dealt with with intention and taste, and that there are people who have those skills. I think the really interesting one, though, is models that don't exhibit a lot of the fundamentals that language models have, such as image models, text to speech, speech to text, speech to speech, all these other modalities. We spent a lot of time thinking about and discussing this in the past year.

Fraser:

And I'm still frankly unsure of how to navigate the world with these types of models.

Nabeel:

That's what's fun.

Fraser:

That that is what's fun.

Nabeel:

I mean, first, we should talk about what happens if the model is a solved problem. Right? So, like, a world where, for some set of jobs to be done, the model is already commodity and the model already works. And we know it already works because Microsoft is making the profit on inference, not training. It's like people are actually using these products.

Nabeel:

So in the world where the model is commodity in audio or images, obviously, a company is gonna succeed and become enduring for all of the other reasons companies succeed outside of technical model differentiation. Right? It's gonna be economies of scale or network effects. It's really the 7 Powers stuff: scale economies, network economies, counter-positioning, switching costs, branding, cornered resources, or operational process power.

Nabeel:

It's really not about AI anymore. Just like, eventually, these weren't dot-com companies or Internet companies. These were just companies that got through the evolution of a technology cycle. And so we look at the other points of differentiation. So I guess the only really interesting question is which problems are unsolved and will likely be unsolved for a long time.

Nabeel:

And so, therefore, being on the frontier is really important. Right? And I think this is an interesting, subjective argument. Let's just pick something. Like, is image a solved problem or an unsolved problem right now?

Fraser:

That's a great question. It depends, and I hate to give you that pedantic answer. Right? Like, what is the use case for the image, and what is the end user trying to do with the generation? Like, why are they generating an image? I think if you're looking for a benign stock image that you can throw in the back of a slide deck for your company presentation, it's pretty solved.

Nabeel:

Yep.

Fraser:

Right? Like, and that replaces what? The replacement there is spending 5 minutes doing a Google image search to try to find the one thing that's, like, adjacently close to what you have in mind for that slide. That's solved. But any use case that requires control or, like, fine-grained tuning of the image to meet some sort of need, I think, is not solved.

Fraser:

And then there's all sorts of different axes as well. I think that we're starting to see it: Midjourney had a cool new release where they have some first or early attempt at character consistency. And so all of a sudden you have this axis of time. And so if you need imagery rather than images, and you want consistency of character or style, that's an unsolved problem. I think there's all sorts of different end user problems where, when you think of it from that lens, we're far from solved.

Nabeel:

That's a good framework. As usual, I'm gonna take your very strong thinking and then package it up into headline format and make it sound like my own. It feels like the first job to be done is: can it do it at all? The second is: can it do it with taste or discernment? And the third is: can I get exactly what I want out of it?

Nabeel:

Can I get control? And if you pass that through for images, there are some people who don't really care that much about the second and the third. That's the stock image, Google image search format. Right? That's their job to be done.

Nabeel:

They don't need the second and the third. They just need some background color for a slide for, you know, their internal marketing presentation at PG&E, and it's like, fine. And for that world, it's certainly solved, and it's a commodity, and there's a bunch of players. And there, it's whoever's the fastest, cheapest. It's the other foundations of the 7 Powers stuff that we can talk about, the kind of, like, ways a company competes.

Nabeel:

The second, on taste in images: I would argue, personally, only Midjourney is there just yet.

Fraser:

Mhmm.

Nabeel:

The style consistency and character consistency stuff that Midjourney just released also gets to the third bit, which is about fine-grained control. And it's game changing for me and some of the side projects and things that we're doing. You mentioned character control, but they also released style references.

Fraser:

So you

Nabeel:

can keep a consistent style across images, and it's amazing. And I don't know. It's interesting that when we put it in that framework, it makes everything feel earlier. Because, other than text, images are probably the thing that is the furthest along. And when you kind of grade on a curve there, you'd say it's a 10 out of 10 for 'can it do it.'

Nabeel:

You know, maybe Midjourney is one of the only players, I think, personally, on the aesthetic side that's, like, you know, an 8 or 9 out of 10. Like, it's getting most of the way there. You feel like it's gonna be a solved problem. Other people will probably catch up in the next year. And then there's still a lot of room to run on the application layer, going down into the model layer, on fine-grained control and getting the exact image and, frankly, all the product surface area.

Nabeel:

How do I edit this thing? How do I use this thing over time? That's still, you know, 3 or 4 out of 10. Right? With a bunch of little players that are cropping up, trying different avenues for UIs and UXes to interact with these models.

Nabeel:

And that's images. You know, we go to sound, and everybody can freak out about the fact that you can do it. I mean, Descript was one of the first to do it in a UI, but there's lots of places now, ElevenLabs and lots of other places, where you can get sound, and it's getting better. Yeah. So the 'can it do it' is great.

Nabeel:

I don't know. I would argue audio, we're still not quite at the 10 out of 10 on aesthetics and taste. It still doesn't sound the way I want it to sound, and we're like a 0 or a 1 out of 10 on control. I want this to sound like a question. Make me sound more confident when I say this.

Nabeel:

That stuff is still very early. Right?

Fraser:

Yeah. On text to speech, so you type in text, and then you get the AI to create your voice. We spent a good amount of time exploring that space last year, you and I and others here. And I think our experience there was really insightful for this discussion, where there are really great founders who are at the start of building really interesting companies. Our observation was that it does text to speech, the basic generation, pretty well.

Fraser:

Mhmm. Voice cloning, actually getting pretty good. And then any semblance of control is really, really bad. Right. And our takeaway from, you know, talking to very smart people who are building and dealing with customers and end users is that you get control appropriate for your use case by building the UI that that use case needs, such that you can then get the feedback data from your users to basically bend the arc of the model, if you will, into that use case.

Fraser:

And I think we're still at the very earliest stages of that. I think that actually, and maybe we'll talk about this and then come back to images because it might be interesting, it feels like the use cases within text to speech are slightly more mature for us to get a handle on. So there are people who are trying to basically bend that arc for text to speech in conversational settings, like customer support phone calls, and there's discrete UI getting built to get the types of feedback that's required to give you control of the model layer. And we're probably a couple of generations away from that. There are people doing very similar approaches, like full stack, if you will, for long form audio, like audiobooks and text to speech of Internet articles and dubbing, all sorts of differentiated end user experiences, where, like, you and I would understand that a true conversational app probably feels quite different from an app for dubbing videos into different languages.

Nabeel:

Oh, it also has wildly different cost structures and inference speed requirements. Like, you just might build different models.

Fraser:

Yep. And then don't you think the same thing's gonna happen on imagery? And I guess if not, why?

Nabeel:

I think it kinda

Fraser:

And more concretely, like, is this just an enabling technology for, for

Nabeel:

Obviously, all of these are just enabling technologies for a job, for a company to do a thing out in the world with customers. Like, yes.

Fraser:

Yeah. Yeah. Well, okay. Well, listen. I deserve that.

Fraser:

But, like, what I'm trying to say then is: are the early successes that we're seeing on imagery because they're going for the horizontal, broad use case of, like, hey, it replaces Google search? Or, you've spoken about how a lot of the Midjourney stuff is an entire new market of creators. And so maybe that's why we haven't seen groups necessarily need to push into some of these vertical use cases where it feels a little bit more bespoke as an enabling technology. Imagery still feels fairly broad.

Nabeel:

Yeah. It does feel very broad. It also, I think, feels very different because some of the jobs to be done with images, which are not the things that come to mind for the average, you know, B2B SaaS founder, are just purely expressive and joy oriented, and they are hobbies. They are, you know, they're closer to Pinterest than they are to, you know, sales, SDR, automated phone calls. And that is, I think, increasingly that right brain side of the world, which drives a lot of the joy of our existence, quite frankly, not to get too

Nabeel:

namby-pamby. It is, I think, one of the things we have lost over the last 5 to 6 years in the Valley, as we have all gotten better and better at, you know, all of our playbooks for how to run a startup, get into this incubator, do this thing, have this go to market formula, and blah blah blah. What we've lost is, like, emergent capabilities, new things, new experiences, a sense of play that comes when you're starting to make new things. And frankly, like, the way humans interact in real life and navigate, which is they go do their job all day, and then they come home, and there's, like, another 40 or 50 hours in the week where they're trying to think about their meaning on this planet and why they're here and what they want to do. And, yeah, we kinda lost a little bit of that because everybody's trying to make everything go up and to the right in 3 months to raise their next round.

Nabeel:

And so I think image is an

Fraser:

That's interesting then.

Nabeel:

expressive medium. I think music is an expressive medium and has that opportunity. I think video is an expressive medium and has that opportunity, which is just a wildly different use case. Just to take video for a second: it's a wildly different use case from 'here's the video; make your marketing video properly lip synced into Chinese' for, you know, a Japanese, like, webinar. You build a different type of company to do that than for 'I'm a 19 year old artist who has a thing I wanna express in moving pictures.'

Nabeel:

Right? Yeah.

Fraser:

Yep. This is such a joy, man. Let me just make, like, a meta observation. You said, let's have a real hallway chat conversation by chewing through this.

Fraser:

You just brought into focus a new way of thinking about these things that I've been struggling with for a year. I think you're right. I think you're right, and what's very interesting is that a lot of the use cases where we're seeing usage and, like, love around imagery... I don't know. Like, it's equivalent to, like, scrapbooking and other types of things.

Fraser:

Like, it's a pursuit. It's a hobby. It's a joy. It's a creator. And I think that that's right.

Fraser:

And I was holding up text to speech, not just on this call but for many months now, as, like, a way to think about it. And that doesn't have the equivalent. Like, I don't know. Like, I made a text to speech version of our podcast, but I'm not doing that on a regular basis.

Nabeel:

You don't wake up in the morning and say, I wanna type something out in text to express in some new way. No. You know? It's just not the way we think about voice and speaking.

Nabeel:

There is a world where, you know, many people make new radio plays that didn't exist before, and that becomes a medium. But we have no evidence of that happening yet. You know, maybe it happens once the voices get good enough and people start doing it. Maybe it happens once it combines with video, and so you have, you know, the

Fraser:

Well, listen. I have no doubt that it

Nabeel:

will happen. A different thing.

Fraser:

Yeah. It's probably already happening. My guess is that the number of people who are interested in something like that is a drop in the ocean compared to the number of people who love to be creative with imagery. And I think that you're right. Like, music feels like another one in that category.

Fraser:

I remember first downloading Fruity Loops and then trying GarageBand because I wanted to have that creative outlet. I just couldn't do it. That feels good. That feels good, man.

Nabeel:

Well, I really like... Feels great.

Fraser:

You

Nabeel:

know, we can end here for our meandering conversation, but I actually learned something from this process. Like, I feel like I had this relief where, on the headline question, I feel like I have some newfound clarity as well. I feel like there's this question of: how much time do you really think there is for this to be a, quote, unquote, solved problem technologically in each of these mediums? You know? Audio, video, music, robotics, images, coding.

Nabeel:

It's kind of like time to solve, and then time to solve across these three dimensions: Can it just do it? Can it do it really well, with taste? And can it do it with fine-grained control? As kind of three hierarchies. Interesting.

Fraser:

And you you

Nabeel:

know, we're all gonna come to different conclusions, and founders will come to different conclusions about where they think that timeline is, and then you act accordingly in what you wanna build and where you wanna build it.

Fraser:

Yeah. There's another axis to that, though, and you're gonna have to be the one who packages it up succinctly, because I think all three of those are relevant for each discrete use

Nabeel:

case. That's right.

Fraser:

Like, I think that... absolutely. Right? And so, like, you might have it solved.

Nabeel:

I want this to exist in the world. Yeah. It never starts from the technology up. It starts from the kind of, like, cutting

Fraser:

it down.

Nabeel:

Right? From what people are gonna do with it, down. That's a good point.

Fraser:

Yeah. Then I think that yeah. Great. What a

Nabeel:

what a joy.

Fraser:

What a joy. What a joy this time is. And, like, if you're curious about technology and the world, there really is nothing like this moment in time.

Nabeel:

So true. Okay. New topic. I don't know how long we're going on it for, Fraser, because we're just talking our own book. But, honestly, the thing I've been playing with the most this week is the new version of Claude.

Nabeel:

You've also been trying it, I assume?

Fraser:

Yeah. Yeah. I mean, so have all the early adopters on Twitter. It is a great week for Anthropic.

Nabeel:

Yeah. That said, I've been trying to spend some time here talking about product, and the nuances of product, and the choices that they made. And I think there are some things to talk about very briefly there. Like, it's obviously prompt tuned to be very succinct in its responses. The, you know, the vibes are good, and it's amazing how much that matters.

Nabeel:

But other than that, I actually don't know that I have anything, like, super insightful for a founder or VC to, like, take away from Claude. Do you have anything?

Fraser:

I think we're seeing that there's at least now another group who can provide and build exceptionally good models at, like, the top of the leaderboards. But even that is an interesting thing. Right? Because the evals... you had a great observation, and I'm not gonna steal it. So why don't you talk a little bit about what the evals are showing?

Nabeel:

Probably the only interesting thing I see to note is that, a couple of weeks ago, we went through everybody freaking out about Gemini from Google. It's so amazing that, of course, we have to, you know, rename Bard to Gemini. It's just interesting to me that researchers, when any of these benchmarks come out and I ping them about them and we talk about them, kind of, you know, shrug their shoulders and say, well, the evals are a thing, but I wanna wait until I get it into practice before I really believe this. And the thing that I'm most excited by, and it certainly maps to my own personal use, is that, like, 2 weeks later, people in practice are actually using Claude.

Fraser:

And I

Nabeel:

don't feel the same thing about Gemini. I don't know if you do or if you've heard from folks that are trying to adopt it. Everything is, quote unquote, close to GPT 4; that's what the benchmarks said. Right? All the new models are close to GPT 4.

Nabeel:

In actual practice and behavior, that doesn't seem to be true. And in actual practice and behavior, it feels like Claude is the first model where we're actually seeing people come back with a lot of positive energy about the model actually in use.

Fraser:

Yep. That's right. You know, I have a lot of both emotional interest as well as, like, historical muscle memory in ChatGPT. And so I've been trying with great intention to switch my habits to these other models. And I'll tell you, Gemini, it's good.

Fraser:

Like, I think that's how I would sum it up, but then I found myself having to purposefully remind myself to go there when I wanted to go to ChatGPT. Yeah. Claude's great. Like, I think that I'm in the sentiment that we're seeing on Twitter, where people are saying, for a lot of their use cases, it's actually intangibly better. It's the vibes thing.

Fraser:

It's harder for them to explain it. It may not actually come up in evals, but it's their preference and they're making the change. And I'm in that camp for now. We'll see. Like, this is a fast moving part of the market because it's so important for everybody.

Nabeel:

I will say, as much as we bag on evals, if you look at the LMSys arena leaderboard, you know, Claude, GPT 4, and Bard all seem to score well; like, they're the only 4 models that score above 1200. But there's still a pretty significant gap in that between Bard and Claude. You know, Claude Opus and GPT 4 are, like, within 2 points of each other. They're ostensibly at parity, whereas with Bard it's a pretty big gap. And so as much as we bag on evals, I gotta say that, like, that Elo board, especially if you look down at the confidence intervals on model strengths, seems to map to my reality of trying to use these things.

Nabeel:

Yep. And that would make you think that it really is, right now, just Claude and GPT 4, and then everything else, you know, Mistral and Claude 2.0 and Bard, as either tier 2, and then tier 3 and beyond.
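Sidebar:

For a rough sense of why a 2 point Elo gap reads as parity while a 50 point gap does not, here is a minimal sketch of the standard Elo expected score formula, which is the usual way arena-style leaderboard gaps get interpreted. The function name and the example ratings are illustrative assumptions, not actual leaderboard numbers, and the leaderboard's exact rating methodology may differ.

    # Minimal sketch: standard Elo expected-score formula, often used to read
    # arena-style leaderboard gaps. The ratings below are illustrative only.
    def expected_win_rate(rating_a: float, rating_b: float) -> float:
        """Probability that model A is preferred over model B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    if __name__ == "__main__":
        # A 2-point gap is effectively a coin flip; a 50-point gap is not.
        print(f"2-point gap:  {expected_win_rate(1252, 1250):.3f}")   # ~0.503
        print(f"50-point gap: {expected_win_rate(1250, 1200):.3f}")   # ~0.571

Under that reading, Claude Opus and GPT 4 trading blows is expected, while a larger gap down to the next tier implies a consistent user preference.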

Fraser:

Yep. Maybe another thing to talk about with this set of releases from Anthropic is the smaller model called Haiku. And it is relevant, I think, to raise because it touches on a couple of themes that we've discussed over the past couple of weeks. I'm looking at Flo from Lindy, the founder and CEO of Lindy. They did evals with the Lindy assistant.

Fraser:

Mhmm. And they showed that Haiku is, like, a hair's breadth away from GPT 4 Turbo and significantly better on their evals than GPT 3.5, and it's 80% cheaper than 3.5. Right? So all of a sudden, you have what must be a smaller model. Who knows what type of magic they have going on behind the scenes?

Fraser:

Almost as good as 4, dramatically better on their evals than 3.5, and 80% cheaper. Like, that trend of cost reduction is just going to continue to be driven forward. You should just build with that in mind. I think

Nabeel:

it's very true. You should be building for where you think the models are gonna be a year from now, is the common phrase that we use, and that is hard to do when we're still trying to internalize what the models can do now, and there are still emergent properties in what the models can do now. That's not an easy product problem, but that is the product challenge of this age. I wanna come back to the models' evolution in a second, but the last thing I will say on Claude and ChatGPT, GPT generally, is also just, kind of, the thing that's emerging, at least right now, around safety and controls. It could change tomorrow. For me, it's very interesting that what feels like it's happened to GPT over the last 2 years is that it's become a little more and more constrained.

Nabeel:

Some of my friends who are using GPT in games have found that just, like, getting it to speak in certain ways and be very open is just harder. There are just more guardrails around the way GPT works, unfortunately, as they try and home in on what it should and shouldn't be doing. So GPT seems to have taken a tactic where, at least for right now, they're constraining the outputs to make sure that it's safe and then giving it lots and lots of access to the Internet and APIs and the web. Whereas Claude, from a product lens standpoint, is much more free in what it's allowed to say and how it's allowed to do things, but they've constrained it, at least for now, so that there's no access to the Internet, as their way of controlling for safety and what it can do agentically.

Nabeel:

And I just think it's worth calling out that for some users who need Internet access, Claude may not

Fraser:

Right.

Nabeel:

be the right thing. But if you want freedom in, you know, the kinds of conversations you need to have, Claude is still probably best of breed. And I feel like we saw this happen in lots of product categories, where things that feel really similar in the beginning start to get different jobs to be done. They start to get slightly different customers. And as you pull that thread further and further... I just remember, in the early days of social networking, when Twitter and Tumblr and Facebook all felt like the same thing, exactly the same thing.

Nabeel:

Right. And then you fast forward 3 years, and it's so obvious, you know, that they're speaking to very different communities with very different surface areas. There are just different jobs to be done, even inside of social networking. Right? And we're not even talking about places where there's strong technical differentiation at all.

Nabeel:

Right? It's just the same technology, ostensibly. It's just that you're talking to different consumers for different jobs.

Fraser:

That totally resonates with me. And then, I think, the other thing: I've been jumping between ChatGPT and Claude now for a couple of weeks, and the product decisions that are getting made around GPTs and the extensibility of one platform, versus the, you know, relative simplicity of the other platform, mean that, even if you think the Elo scores are very similar, the product experience is drifting dramatically apart. And it's a real time experiment in the market to see what is going to resonate with consumers.

Nabeel:

Yeah. That's right.

Fraser:

Okay. Last question. How should people, both on the investment side and, perhaps more importantly, on the founder side, think about a SAFE versus a priced equity round? This is a long running topic that people have, I don't know, almost religious views on. And I remember seeing a whole bunch of different threads arguing for priced rounds because a founder may not understand how all of the different SAFEs that they pull in interact with one another and then cascade into a certain amount of dilution that they couldn't forecast when they finally do a priced round.

Fraser:

And I'll tell you, as a founder, I just kinda, like, waved all of that stuff aside because it felt like an intellectual exercise more than anything that was too helpful. Right? Help me and help founders think through: when is a SAFE the right vehicle, and when is a priced round the right vehicle?

Nabeel:

Well, historically, let's take a step back. Historically, this used to not be SAFE versus priced round. This was a convertible debt round versus the priced round, and then YC was trying to find a way to smooth all this out, to make it a lot cleaner and faster than a convertible debt round. And so it was really YC that invented the SAFE, which is a little bit like a convertible debt round. In other words, you have a bunch of money sitting in a bank account that will convert at the next round that sets the price, but it actually has even fewer controls than a convertible debt round.

Nabeel:

There's no redemption. It doesn't actually sit as a senior asset on top of the round, and a bunch of other things that are less important. I think the pitch for a SAFE is that it's easy, it's fast, I can get back to work, which can feel

Fraser:

Yep.

Nabeel:

great to a founder who is already nervous about fundraising for the first time and wants to get things done. It's very opaque and weird when you first enter this world, all these different words, like, you know, preferred and

Fraser:

It's great to me as an investor, man. I'll tell you.

Nabeel:

I remember the first time that I got a term sheet for a round, I literally, like, went to a bookstore and bought a little short pamphlet, a 50 page book that was, like, how to read term sheets. And I, like, sat down and read it over the weekend to try and make sense of all these crazy words. So I understand that there's a kind of desire to, quote, unquote, get back to work, and there are times where SAFEs are the right thing to do for that reason. But on the reasons to do a priced round: I lean mostly to priced rounds, even at seed and pre-seed.

Nabeel:

And the reason I do is mostly for the founder's benefit, not for the VC firm's. It's kind of neither here nor there for us. The first reason is the one you mentioned, which is less important but is a thing: I've seen way too many founders that stack different types of convertible debt on top of different types of convertible debt, end up raising at a bunch of different caps and blah blah blah, and then are, like, shocked when they go to finally raise the Series A at how much dilution they just took on, because they just didn't quite understand everything that was happening and because the conversion mechanics can be a little odd. And so they, like, diluted the company by 30 or 40%.

Fraser:

Right.

Nabeel:

What did I just do? So that certainly is much more likely to happen with SAFEs, where you're just not doing the accounting along the way. The kind of more important thing for me, though, is just understanding that, however early your term sheet is in its life cycle, this is not the last round you're gonna raise. If you're a venture backed company at venture backed scale, you're probably gonna raise 2 or 3 or 4 or 5 or some number of rounds.

Nabeel:

The cleanest terms you're gonna get on a round are your earliest terms. Like, with your seed stage investor or your Series A investor, especially if you have a little bit of choice, but just by nature, most of those term sheets are very clean and very straightforward. Your Series C investor, the growth person, has gotten very good at all kinds of weird asks that are hard to navigate and hard to push back on. And so for me, getting a term sheet that is clean in early to the company and setting a precedent of clean terms will help that founder in every subsequent round, where, if you are doing pretty well, your ask of all of the people coming in and giving you term sheets is: I just want you to match the terms of previous rounds. And if you set a precedent of a clean term sheet, which is easier to get with a seed investor or a Series A investor, and then you have good demand for your following round, it's just a lot easier to keep a clean term sheet and keep a bunch of stupid muck that people try to get in out of your company over time.

Nabeel:

And so that's why. You know?
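Sidebar:

A minimal sketch of the stacked-SAFE math Nabeel is describing, assuming post-money SAFEs with valuation caps. The amounts, caps, and round size below are hypothetical, not numbers from the episode, and the sketch ignores option pools, discounts, and pre-money SAFEs, which convert differently.

    # Hypothetical stacked post-money SAFEs: (amount raised, post-money valuation cap).
    # Under a post-money SAFE, ownership at conversion is amount / cap, and the
    # SAFEs dilute the existing holders (founders), not each other.
    safes = [
        (500_000, 5_000_000),
        (1_000_000, 8_000_000),
        (1_500_000, 12_000_000),
    ]

    safe_ownership = sum(amount / cap for amount, cap in safes)
    print(f"SAFE holders at conversion: {safe_ownership:.1%}")  # 35.0%

    # The priced round then dilutes everyone again, e.g. a 20% Series A.
    series_a_dilution = 0.20
    founders_after = (1 - safe_ownership) * (1 - series_a_dilution)
    print(f"Founders after the Series A: {founders_after:.1%}")  # 52.0%

Three modest looking SAFEs already hand over roughly a third of the company before the Series A term sheet even arrives, which is exactly the accounting along the way that Nabeel is recommending founders keep up with.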

Fraser:

And, like, explaining where the question came from: I love SAFEs even as an investor, right, which is kinda counter to some of the narrative. You know, all else equal, it feels like there's mostly parity now on the stuff that matters. And so, there you go. And I had observed that while I've been YOLO-ing into SAFEs, some people who I trust and respect, such as you and others internally, were doing priced rounds. And the question that I've had on my mind is, like, what mistake am I about to step into, or what unknown am I not aware of?

Nabeel:

No. It's not a major mistake either way. I do SAFEs. I do priced rounds. I just think it's a better service to a founder.

Nabeel:

And by the way, the thing is, SAFEs used to be super simple and super fast, and priced rounds were super expensive and really Byzantine. And the thing that's happened over the last decade is that SAFEs have gotten a little more expensive and a little more complicated as people have added stuff to them, and priced rounds have gotten a lot cheaper and a lot more streamlined. And so the kind of gap between the two things, in terms of, like, lawyer costs and time and energy, has just compressed. Yep.

Fraser:

That that

Nabeel:

So I might as well do the work with the founder, if I wanna spend the time and energy, set the precedent, and, like, just set them up well for the future. That's more of the situation. But if a SAFE is the easiest way to go, it's a SAFE.

Fraser:

Yeah. Yeah. I think that all makes sense. And when I was on the other side of the table, I lived through experiences where we did priced rounds early. Not only does it set up the terms in a clean way that you can carry into the future, I'm now remembering experiences where, once the terms were set, subsequent investors who wanted to change the terms were not just negotiating with us, they were also negotiating with the previous investor.

Fraser:

And so there was, like, a little bit more support on keeping the terms in place. So that feels good.

Nabeel:

Exactly right.

Fraser:

Oh, is that it? Do we have anything else? Great job. Hey.

Nabeel:

How long have we been doing this?

Fraser:

Thanks, man.

Nabeel:

You too. Thanks, everybody, for listening. If you have things you want us to try, or stuff you want us to talk about, we're always here. Otherwise, we'll see you. See you real soon.

Fraser:

See you in a bit.