Fraser & Nabeel explore what it means to build great products in this new world of AI.
Two former founders, now VCs, have an off-the-cuff conversation with friends about the new AI products that are worth trying, emerging patterns, and how founders are navigating a world that’s changing every week.
Fraser is the former Head of Product at OpenAI, where he managed the teams that shipped ChatGPT and DALL-E, and is now an investor at Spark Capital. Nabeel is a former founder and CEO, now an investor at Spark, and has served on the boards of Discord, Postmates, Cruise, Descript, and Adept.
It's like your weekly dinner party on what's happening in artificial intelligence.
Coaching Up: When your AI should teach your users
===
[00:00:00] Intro
---
[00:00:00] Fraser Kelton: It's not that I'm not using AI products. I'm just not using a lot of AI features in the products that I've adapted to or adopted over the past decade.
[00:00:10] Nabeel Hyatt: This whole fixed ontology in a native age of unstructured AI, it just seems a bit silly.
[00:00:15] Fraser Kelton: I don't know how to describe the tone that Claude had when it told me not to use chicken broth.
[00:00:20] It did it like with a little personality and it built rapport.
[00:00:24] Nabeel Hyatt: But a word that I've been using a lot over the last year or two, which is just: what is the altitude of your editing? What is the altitude of your creation?
[00:00:32] Fraser Kelton: I think we are still sorting through the core experience, and like the delightful experience, that is like the first step for this technology.
[00:00:42] Nabeel Hyatt: So now they're asking the question, what happens when you come back the 50th time? Yep. And how do I expand your capabilities, help you become an expert in that thing?
[00:00:52] Welcome to Hallway Chat
---
[00:00:52] Nabeel Hyatt: Hey everybody. Welcome to Hallway Chat. I'm Nabeel. I'm Fraser. Welcome back. Good to see you. Welcome. Yeah. Nice to see you.
[00:01:00] Fraser Kelton: We're on the video, but it was nice seeing you in person at that offsite this week.
[00:01:03] That was good stuff. It was a lot of fun.
[00:01:05] What have incumbents really done with AI?
---
[00:01:05] Fraser Kelton: So Ravin, who's the CEO and co-founder of the Spark investment Credal.ai, pointed out something to me that I've been chewing on throughout this week, and I want to share it with you and see what you come up with. Sure, man. He is doing sales into both scaling startups as well as enterprise companies, like Fortune 500, and has a whole bunch of qualification calls that he's doing founder-led sales for.
[00:01:29] And on all of these calls, he asks: what product that you purchased before 2023 do you use AI features in? Mm hmm. And he told me that the only answer he has ever received to that is GitHub Copilot. There's no other. And presumably he's, like, hundreds of calls in. Yeah, yeah, yeah. And he, you know, he made it clear that it's not that there's a sporadic smattering of low-volume answers.
[00:02:04] It's nothing. It's nothing.
[00:02:06] Nabeel Hyatt: Yeah.
[00:02:06] Nabeel Hyatt: Well, what is it for you, Fraser? Like, of all the incumbent companies you could use, we're using lots of these: Slack, email, Google Docs, Notion, Canva, Zoom. I'm just trying to go, you know, off the top of my head: Jira, Adobe Creative Cloud.
[00:02:26] Yeah. Are you using any of the AI features in those incumbents regularly?
[00:02:28] Fraser Kelton: None, right? Like, that's the surprising thing, is that at first I pushed back on him, and then I thought about it, and he said, okay, which one for you? And all of the obvious ones that, if you had asked me a year and a half ago, what are you going to be using AI in,
[00:02:45] I would have told you: oh, Slack has, you know, the ability to do channel summaries or synthesize all the content that you missed. I would have said that. I don't look at those features at all. The Gmail auto-complete stuff, or, like, write-your-emails? I tried it once, and it's not something that I've ever returned to.
[00:03:06] I don't use it in Word. I certainly don't use it in Zoom or Google Meet. Or Google Sheets? Google Sheets? No, none of that. How about you?
[00:03:14] Nabeel Hyatt: Yeah, I mean, the ones that feel the most mature, and at least natural in implementation, are Zoom meeting transcription and Adobe Creative Cloud. You know, I'm going to do a fill or remove, like, get this thing out of the background, or use a diffusion model to help.
[00:03:36] Fraser Kelton: Yeah. Yeah.
[00:03:37] Nabeel Hyatt: I think Adobe definitely has a surface area which makes sense to integrate diffusion models into. Zoom makes sense to integrate transcription into. At the same time, like, we made a counter-bet on Granola versus Zoom, and I feel very happy and confident about that. Like, I don't think raw notes just transcribed, or even some kind of meeting-bot summary of the Zoom meeting, is the right product surface area that people want.
[00:04:07] And similarly with Adobe Creative Cloud: like, I do want a little box when I'm in Photoshop to maybe add a cloud or whatever, or do something simple, but that's entirely different from the net-new interfaces that companies like Midjourney and other ones will do. And I think both of those can coexist. And if you ask me, which one will people be using more in 10 years?
[00:04:30] I wouldn't pick Photoshop, right? No. No.
[00:04:34] Fraser Kelton: What about on the consumer side? That's the beauty of Ravin's question, right?
[00:04:38] Nabeel Hyatt: Yeah, like on Spotify, there's a cool little AI DJ thing with a voice that speaks to you, which is kind of cool. So they're using, like, deepfake voices for a DJ that uses old ML algorithms to figure out what song to play next.
[00:04:52] That's been around forever. So, kind of deepfake generative voices: that works. That's something. YouTube transcription, again. Snapchat? There's a bunch of these social things that just shoved, you know, ChatGPT into whatever social platform they're in. That's not interesting.
[00:05:08] Fraser Kelton: No, not at all. Like, the Spotify thing, I just couldn't help but laugh.
[00:05:12] I'd never even heard that they've done that. What? It sounds so ridiculous.
[00:05:17] Nabeel Hyatt: Oh, it's quite, it's quite good. The voice quality of the DJ that speaks to you every day is impeccable. And they actually modeled it after a person at Spotify and his voice. Sure, sure. It's fine. But is it, you know... this goes back to that framework we had a while ago, which is just, like, maybe we need to re-look at our priors.
[00:05:35] We're a couple years in, and I remember everybody saying, you know, post-gen-AI, like, give everybody a year. They have to ship. It takes some time. But if we have these adaptation, evolution, revolution buckets, which we talked about at length in a previous podcast episode... like, yeah, I have been readdressing my priors.
[00:05:58] Like, I figured the world would drop into 50-50, you know, or maybe a third, a third, a third, or something like that. But that incumbents would have lots of advantages, and that would mean that small adaptations would have them win. Yep. And I don't think that's right. Yeah, it's more in the latter half here.
[00:06:16] Like, I think we're going to get a lot more evolution, revolution. Maybe that's just us drinking our own, you know, VC whiskey on wanting there to be startup advantages. But it's like you said, like Ravin said: it bears out in the uses so far, which is, like, I'm not using AI features in many. The Slack integration is not great.
[00:06:37] That's a very natural situation where you would think AI summarization, catch me up on everything I missed from yesterday, would be good. Of course. I did turn it on for our Slack. It was not good enough. It was not helpful.
[00:06:53] Fraser Kelton: No, it feels like an afterthought tacked onto the product. And I think that that's the really fascinating thing about the structure of Ravin's question.
[00:07:02] It's not, which AI features are you using? It's, which products that you bought prior to 2023 have AI features that you're using? And the interesting thing to me is, you look at the inference workloads that all the cloud providers are seeing: it says that there's a huge amount of activity happening here. Not training, but inference.
[00:07:22] And if I think about my own use, it's not that I'm not using AI products. I'm just not using a lot of AI features in the products that I've adapted to or adopted over the past decade. Transcription is an obvious one. But I'm not using transcription in Zoom or Google Meet, right? I'm using a lot of new products that have rethought the experience for this new capability.
[00:07:47] Nabeel Hyatt: I have a counterexample. Yeah, okay. It's the one we're going to use in a couple of hours to edit this thing, which is Descript. Ah. But that's a little bit of a cheat answer, since Descript was built with AI in mind already, and so it was already post-revolution. It just happened to be earlier in this cycle, pre-2023.
[00:08:07] You know, that's a surface area where they surface new AI innovations on a regular basis, and I find myself attracted and trying most of the new AI features that they ship because they're, you know, integrated naturally into a new workflow and they just work.
[00:08:23] Nabeel Hyatt: I was playing around with some of their new, like, generative avatar capabilities.
[00:08:27] And I think I just use that product in a way that, like, you know, I don't personally have 10,000 training videos that I need to translate to Swahili. And so, like, the HeyGens and Synthesias, I'm not the target market for that. But in terms of our workflow, there's one. Yeah, there you go.
[00:08:44] That is AI workflows. But in a way it kind of proves the point rather than not, because it was a new workflow and surface area that was invented with AI in mind.
[00:08:55] Fraser Kelton: Right. It was created with the idea that this is all going to happen.
[00:08:59] Trends in AI integrations that work
---
[00:08:59] Fraser Kelton: I find that to be fascinating, because I think, as you said, a year and a half, two years ago, we were sitting here thinking, you know, maybe it's 50-50, or certainly the first winners are going to be things that can just extend naturally.
[00:09:10] And I probably would have listed Slack as the thing where we're like, hey, this is great at synthesis and summarization, and nobody likes the Slack overload. And that just hasn't borne out. Like, Ravin's experience talking to all these people, my own personal experience. So then the question becomes to me, like, is this a new capability that's just not good enough,
[00:09:30] and/or is not going to get to the point where we think it could be profound? Like, I still think it's going to be profound. Or is it that it's a capability that is so new and interesting that you can't just tack it on to an existing product surface and deliver value that way?
[00:09:49] Nabeel Hyatt: I think Slack's a great example, because I had this experience this week.
[00:09:52] I'm working on a side project with a couple of friends. We can leave aside what the side project is; it's a weekend thing. But it caused us to start a new Slack. We actually started a new Discord instead of a Slack, but for the purposes of this it's the same thing. And I found myself deeply frustrated
[00:10:11] with the experience. Because, you know, have you ever started a new Slack or a new Discord? Like, you're like, okay, here are the five channels or the six things we think we're going to talk about. And then I find myself, like, coming up with a new idea. And then I'm like, okay, which of the channels does it go in?
[00:10:27] And then it turns out a week later that we're talking 90 percent more in one channel versus another. And so I'm like, should I shut down that channel? Should I take half these topics and move them over? And it just occurred to me: this whole flow is completely broken. Like, I'm sitting inside of Slack.
[00:10:42] And I'm thinking, why is there a fixed structure? Why do these channels exist? Everything that used to be structured just might not need to be. Every time you look at a piece of software and you see some piece of structure to it, the question is, shouldn't that be malleable now? Like, why does Slack have channels?
[00:11:00] It shouldn't. You know, it was there to structure all of the information flow and to keep it categorized. But is that really relevant in a world of AI? Shouldn't I just open up Slack, type in a message, click maybe one button, public or private, and then just hit send? And then shouldn't AI figure out where it goes automatically? Who sees it? Yeah, who sees it.
[00:11:21] Blah, blah. I think this needs to go to these people based on your past context and experience. And then, when it comes to channels, shouldn't it just be, like, dynamically adjustable anyway? Like, you know, in other words, if we realize we need to add a frequently-asked-questions channel, I should just add that channel.
[00:11:37] And then it auto-populates, from all the other channels, all the questions that were frequently asked. It knows the context, right? Another example is Notion. Every time I open Notion, I feel anxiety about page management. Where should I add something? Should that be a new page for this concept? Or should I just add it to the end of the page?
[00:11:55] What if it's split between two pages, and the concept's a little bit of this and a little bit of that? This whole fixed ontology in a native age of unstructured AI, it just seems a bit silly. And I don't know if that's the truth of the answer to the question of why AI features are not being used more when integrated into the products that you're already using.
[00:12:14] But I think it teases at something, which is just: these are core capabilities. And maybe there's just way more categories than we think where the answer is to start with a new workflow from scratch, with AI and models.
[00:12:27] Fraser Kelton: Mm hmm. I can get my head around that. It's so new, and the history of our products has been built up to actually solve for the fact that this doesn't exist.
[00:12:38] Right. And so we have to put in scaffolding and structure from elsewhere, and then this new capability is somewhat adjacent to, or orthogonal from, these product surfaces. Sure. Yeah, I can get that. So let's try then testing that hypothesis. You said that the one product that has adapted AI in a reasonable way today is Photoshop.
[00:13:01] Yeah. And, listen, I believe that Midjourney and others, like, there will be entirely new editing experiences that will come up. But why do you think they've had success in the short term, whereas, like, Slack hasn't? Like, certainly I can't imagine anybody's using the Slack AI features, but reasonable people are using, like, the diffusion models within Photoshop.
[00:13:20] Nabeel Hyatt: There's clearly use of the product in that workflow. And I guess you're saying, like, are there lessons there that might indicate categories where incumbent advantage will be okay?
[00:13:32] Fraser Kelton: Here's my initial thought: it's because that UI is already set up for, like, discrete tools, where you're like, I want to now do this, and you click into it.
[00:13:42] Whereas Slack isn't. So you're just adding a tool. A new tool. And it's not a brush, not a draw-rectangle. It's a, you know, fill-in-the-background-with-clouds.
[00:13:52] Nabeel Hyatt: So, but if the toolbar method works, then why haven't we seen better integrations in Google Sheets, or any of the Office stuff? Right. Those are all, you know, tool-ribbon-oriented UIs.
[00:14:11] Great. I don't know. I don't know. Yeah. Good pushback. There's a world where some of these things do get integrated. Obviously, like, spell and grammar check and deeper versions of that, and like write-my-paragraph-for-me, and shorten or lengthen: those kinds of simple affordances that are about the doc itself.
[00:14:32] I love the Photoshop example, because the thing that it makes me think of is not tools, but a word that I've been using a lot over the last year or two, which is just: what is the altitude of your editing? What is the altitude of your creation? And I think the thing about most tools, if you just think about what Slack is, there's very little channel management.
[00:14:55] There's very little that operates outside of the absolute ant level of interaction in this product. I'm on a Notion page, I'm on a Slack page, and I'm just there to type. Where, what do I type, and how do I type is a lot of the work that was done 15 years ago to build up these products.
[00:15:15] And Photoshop, by its nature, like, you do some work at the pixel level, but by its nature it's, like, loose and at a higher altitude, right? It is, I take a brush that says lighten, and I go across the entire sky, and suddenly, like, it makes some judgment decisions about a huge portion of the work.
[00:15:37] There was a phrase a couple of years ago that someone said, which was just, like, where's my Photoshop for text? That I've always thought about as just a way of thinking about attacking the problem at a different level. I don't want something that maybe takes my paragraph and takes away four words from it.
[00:15:54] Although I use that. That's fine, it's good. I still use Grammarly for that, actually, just because it's portable and it can be everywhere I am, versus using it in Notion versus somewhere else. It's fine. But what I really want is: analyze my whole text and make it all a little bit more clever. Or, what do I need?
[00:16:10] I do a lot of investigative work now, right? It's, what are the gaps in my thinking in this piece? Where did I not come to good conclusions and test my conclusions? A lot of that kind of stuff, which is at a much higher altitude. And maybe the similar answer to my missive about Slack, which may or may not be right, was, again, operating at the channel level and the sending level was at the infrastructure level: how could this be different?
[00:16:36] The conversation with Meter was similar. He's trying to think at a much more infrastructural level about how you would do this work, versus at the micro level: how do you fix one router?
[00:16:45] Fraser Kelton: Let me toss another one at you then. I know that when Granola launched, they launched around the idea that writing is thinking.
[00:16:53] And so they wanted to have an AI note-taker that you could also write in. Interestingly, I did that a lot at the start, and now I don't. Like, I think that the transcriptions have become a lot better, and it just runs in the background, and then it's there when I want to access it. I'm not, like, purposely building notes and adding structure to it. But then they just launched this new feature where, when you're in a note for the current meeting, at the bottom it pulls up past conversations that it thinks are relevant.
[00:17:24] Right. And I think it's just matching on meeting guests or something else like that. But it's the scaffold of, you can imagine that it's just going to pull in increasingly relevant content for the conversation that you're having. Yes. And it does that in an automatic way.
[00:17:38] Nabeel Hyatt: It does, but the control mechanism is with you.
[00:17:42] In other words, if you trust it and you leave it, uh huh, then it will make decent decisions, and you can direct it if you need to. Okay. And I really think that what AI is going to teach a whole generation is basically how to be a good manager. You know, like, the "allocation economy" is how Dan Shipper phrases it.
[00:18:08] I don't know if I love that, but it is working with AI. I think, look, less than, I don't know, 3 percent of the population, 5 percent of the population, has any experience being a manager whatsoever.
[00:18:21] And I look at the way my kids interact with ChatGPT, and they are learning how to instruct other things to do good work at a much, much younger age, and they are doing it every single day.
[00:18:36] In fact, what I would wish AI would be better at, my ask for 2025 of AI, is: if we're supposed to be better managers in order to manage AI, then AI just needs to be better at managing up. Like, what are the basic principles of managing up, and then how do you integrate them into your AI product? That should be an offsite topic for every AI company.
[00:19:00] And when I think about the conversations I have with people who are learning how to manage up, I talk about things like: you have to understand the context of what your manager is actually trying to achieve. You need to develop, like, adapting to the manager's communication style, right? This is why we get these, like, documents about how to work with me and stuff like that.
[00:19:20] And then you have to anticipate your manager's needs and know how to ask questions they didn't ask, right? Like, you have to know the gaps, the areas where they're making assumptions, and then know to ask for clarification or depth. And then you have to know the periodicity of check-ins. Like, how anxiety-ridden or not anxiety-ridden, how much safety is needed in this relationship at this point in time?
[00:19:40] Those are all things that, you know, no one has even bothered to build into an understanding of how an AI is supposed to interact with you. And some future version of Granola, I hope, understands with me the types of meetings that I'm in. It understands when it needs to take really deep notes and when it doesn't.
[00:20:00] And when it's not sure, it asks for clarification, right? It knows how to pull the best out of me, 'cause I might not know how to instruct it properly. Yep. Yep. It's not assuming I'm a great manager. It is trying to pull the right things out of me. And I think I see almost none of that in the market right now.
[00:20:17] Fraser Kelton: I love it. I completely get what you're saying. I've never heard this before.
[00:20:21] Reflecting, it's still early
---
[00:20:21] Fraser Kelton: I will tell you though, I don't think anybody should be caring about that today.
[00:20:28] I had this lovely Sunday last weekend where my only takeaway was: I have thought that this thing called AI was going to be pretty big, and I thought it was going to be pretty big for a long time, and I still think so.
[00:20:41] Nabeel Hyatt: You made a pretty good life bet on that, yeah.
[00:20:42] Fraser Kelton: Repeatedly. You and I have, you know, bet on believing that it's going to be big. And I came away from that Sunday thinking that I'm still underestimating how big it's going to be.
[00:20:50] Was there something that happened? Yeah, yeah, yeah, yeah.
[00:20:52] I switch between Claude and ChatGPT, because it's just a confusing situation for me in my life right now, so I'll say Claude, but it could just as easily have been ChatGPT. I'm making a roast, and it helps me make a roast. I've never made a roast before.
[00:21:06] I'm intimidated. And I will tell you, I felt a level of rapport and camaraderie with this thing as I was using it to make a roast. It sounds so strange for me to even say that. And then I'm tending to an olive tree that we have indoors that's causing some issues, and I just, like, switched with it over to that task.
[00:21:26] And then that evening I sat down and I was writing a memo, and it was again, like, a companion right there.
[00:21:36] All three of those experiences couldn't have existed two and a half years ago, two years ago, and it was amazing. And I still don't think we're anticipating how much this is going to change our day-to-day life as we figure out how to deliver these products to people, and as we, as, like, humans and users, start to embrace these products and understand when and how to use them as well.
[00:22:02] Like, it has to go both ways, right? And so what I meant by "way too early" in my comment is: I think we are still figuring out the first step of how to deliver the product experience that's magical to these users. And I think that what you said is going to be amazing, but it feels like that's two or three steps from now, right?
[00:22:25] And like, let's not push things to the end user today, because we still haven't yet figured out the very first step of delivering the product experience that's most delightful. You know, one thing that you've spoken to me about a little bit over the past couple of weeks is how to think about when to prioritize what type of work.
[00:22:43] Are you prioritizing growth? Are you prioritizing retention? Are you prioritizing the core experience? And I think we should get to that, because it's super interesting. But what I mean by all this is: I think we are still sorting through the core experience, and, like, the delightful experience, that is the first step for this technology.
[00:23:06] We are talking primitives.
[00:23:09] Nabeel Hyatt: I don't agree with you.
[00:23:10] Fraser Kelton: That's why we do this.
[00:23:11] Nabeel Hyatt: Yeah. I think these things are interrelated in a way that you can't set aside. So my rant earlier about why don't we have an unstructured-Slack-product kind of thing as an experiment in social right now: one could argue that we just have more basic things to get done, which is a little bit of your argument.
[00:23:32] Like, we're just super early. But part of the reason I came to the idea, just live while we're riffing right now, about managing up is this:
[00:23:42] Creating AI that manages up
---
[00:23:42] Nabeel Hyatt: Let me take a different angle at this. There's this old framework about, like, what does it take to become an expert in something? And the framework is that you have to have a valid environment for operating.
[00:23:52] It has to be somewhat ordered, so you have to have a set of rules that are fixed so that you can test something. You have to have timely feedback, so there has to be some feedback loop to that set of rules so I can figure out whether what I'm doing is good or bad. And then you have to have deliberate practice against that.
[00:24:07] Valid environment, ordered environment, timely feedback, and deliberate practice. That uncomfortable bit where you can feel the edge of something. And if you just think about how you learn a language, or how you learn tennis, or how you learn coding, like, they all fit into this. You know, I think if we think about operating inside of ChatGPT or Claude, and you think about your roast example, the problem with the horizontal nature of products like these is that many of the things I just talked about don't exist.
[00:24:37] Like, it is a valid environment. The feedback cycles are the biggest, most positive thing about the chat in ChatGPT. For those of us who remember GPT before ChatGPT, it's the timely feedback: I get very quick feedback, and I have it in a chat format, thanks to you and that team, and so on and so forth.
[00:24:54] But when do I do deliberate practice with this tool? Like, I never do deliberate practice with this tool. And the ordered environment doesn't really exist for it. And so that's, I think, where my brain was kind of loosely going when I was saying: in a world where we do not have a way to get better at this stuff, the way that we get, say, better at programming, then what you really need is a coach.
[00:25:17] You need coaching up. Like I was trying to think about other unordered environments where timely feedback is iffy. And I was just thinking about the organization. Like I need one on ones. I need a manager to come to me and tell me about how I'm doing and how I might change my behaviors and so on and so forth.
[00:25:33] And without that, I don't think we'll become truly expert in this field. And that happens at the level of Claude, but maybe just as importantly, it also happens at the level of an individual product. You know, it happens at the level of becoming really good at using Descript or really good at using Granola over time.
[00:25:55] Ideally, the right version of a Granola or a Descript, or any native AI tool in my framework, is that it's got that easy-to-learn, hard-to-master, open-to-expression kind of feeling, where it's really quick to joy and magic, but two years later I'm still finding and navigating through the possible ways I can use this product.
[00:26:20] Fraser Kelton: What would be an example? Like, a suggested follow-up? So Perplexity has, like, some "here are some follow-up questions you can ask." Like, if I'm cooking a roast and kicking it with Claude, it would come back and be like, would you like to figure out, you know, the sequencing to make sure that this is set on the table by X?
[00:26:39] Like, is that a simple step in that direction?
[00:26:43] Nabeel Hyatt: I'm thinking one level above that, which is almost, I kind of imagine, like, why aren't we doing postmortems? You have a month-long project in Claude where you're working on something forever, or you chat with it 25 times to get to the right answer, and then, when you're all done, you have a postmortem session, where Claude is saying: by the way, you know, if you had phrased this a little differently in the beginning, I would have gotten there, like, ten times faster.
[00:27:12] I'm trying to make you better at using Claude. I'm trying to make your efficiency and understanding of how to tell the model what to do better. And I don't think that happens without some measure of reflection after a session. Here's a good example that's a little different than this, but a thing that I've loved doing: now that we have a context window that's quite large, and we have some memory, once you've gone through a really long session with ChatGPT, go find a session you've had going for a really long time.
[00:27:46] You should go in and ask questions like, what questions have I not asked that I should have?
[00:27:51] Fraser Kelton: Mm. Mm hmm.
[00:27:54] Nabeel Hyatt: And it's so good at that. It's insanely good at that, in a way that is closer to the feeling of clairvoyance and intelligence than most of the other things we talk about. And by doing that, if we get back to this kind of like, how do I become an expert in something, that's timely feedback.
[00:28:10] And then ideally, I go back the next time I'm doing it, and I like, internalize those notes or those thought processes. Mm hmm.
[00:28:16] Fraser Kelton: I hear you. I agree with you. And I still am struggling with it, because one of my world beliefs is, like, we in this industry all forget how inconsequential this is to people.
[00:28:31] All the things we do really are not that important to them. They don't want to do a postmortem. Like, their life is busy. They don't want to go back to Claude and give it feedback.
[00:28:41] Nabeel Hyatt: Okay, Fraser, you're right. You're right. And let me try and contextualize this a little bit because I don't think we disagree here, actually.
[00:28:47] Okay.
[00:28:47] PREM: A framework for prioritization
---
[00:28:47] Nabeel Hyatt: I think what you're talking about is like, where are you in the product life cycle? Yeah. I had a board meeting this last week where we were talking about a product and It's doing well. They had a list of all the features and AI features they wanted to do for the future. And we're kind of having that kind of very base conversation about strategy and prioritization and so on and so forth.
[00:29:09] And, you know, I ended up just trying to contextualize for them at a very high level: okay, what are the problems we're trying to solve, guys? And I'm reminded of a framework that I kind of disagree with, but that was useful in the moment, which was Dave McClure's old AARRR framework. We've now been doing this long enough that nobody in the room had ever heard of it.
[00:29:31] So I seemed like a smart person for a moment. You gotta take the recognition there. Do you want to explain what AARRR is for a second?
[00:29:36] Fraser Kelton: So Dave McClure, way back when, when there was optimism and positivity online, shared a framework around how to prioritize or how to think about product. I think we were both
[00:29:47] Nabeel Hyatt: founders at the time, right?
[00:29:49] Fraser Kelton: Yeah,
[00:29:50] Nabeel Hyatt: yeah. I think the phrasing of the post was something like, how to grow products like a pirate.
[00:29:55] Fraser Kelton: Of course. Yep. Acquisition, activation, retention, referral, and revenue. AARRR.
[00:30:03] Nabeel Hyatt: And I think his point was basically it's a funnel view of the world.
[00:30:07] Yep. So you acquire a customer, they arrive on the website, they then get activated, they sign up or whatever, they get retained, they refer other people, and they generate revenue. Like any cute framework, I have issues with it: referral should be virality, because that's what you really mean, but whatever, he needed to make it say AARRR. And it was a way of just organizing the world of features and ideas.
[00:30:34] Like, Hey, do we have an activation problem in this product? Do we have a revenue problem in this product? Why are we talking about revenue features if we really have an activation problem? That kind of way of just like organizing the world. I actually think linear funnels are not the right way to think about these things at all, especially in the world of AI.
[00:30:49] You're going to need all of these things, but I think the core set I would use is something like: you need that magic moment in the product, you need the product to be great; you need retention, which is related, but sometimes a very different thing; you need expansion, growth; and you need monetization. So, I'm not trying to invent the PREM framework or anything like that.
[00:31:12] But that's what that spells: product, retention, expansion, monetization. PREM. Yeah, I've got my self-help book on PREMing coming out very soon. There you go. But I was surprised how elucidating it was for the conversation. And I think the point you were making just a second ago is that we can't be having conversations about retention and expansion, you know, me putting in the work to try and get better at this product, when the product itself is still not magical enough.
[00:31:40] Like, people are impatient. They want the job to be done, and done quickly. You're still trying to solve a product problem. You are not trying to solve a retention problem yet, or a becoming-an-expert-at-the-tool problem yet. And so you're just solving the wrong problem. And I take that point.
[00:31:57] You know, I do think the market generally is still there: there aren't enough magical products. Look, we started this podcast with the idea that every week or two we'd sit down and talk through a new magical AI product experience. Right. It's hard. And whether we invested or not, let's just talk the craft.
[00:32:17] And we don't have something to talk about every week. We talk about them once we come across things. But I think you're right. I think more people need to be focused on product and those magical experiences. And what I was thinking about, probably because I was having a conversation about Granola, and conversations about Descript and some other products, is that I actually really think they do have that magical moment.
[00:32:42] They have that thing. Yep. And so now they're asking the question, what happens when you come back the 50th time? Yep. And how do I expand your capabilities and help you become an expert in that thing? And that's maybe where the conversation came from, which is a different prioritization conversation than you see when you're looking at the broader market.
[00:32:58] Yeah. Yeah. Yeah. I get it. Both of those things are true. All of that makes sense. Are there any new products you've used lately in AI? I mean, there's a lot of stuff we're trying every week, but I mean something that's turned into a habit that you've picked up.
[00:33:12] Fraser Kelton: No, I mean like, uh, yes, depending on like how we want to frame it.
[00:33:16] Granola I use multiple times a day, every single day, and it is delightful. I mean it's not new, but I continue to use ChatGPT and Claude to a degree that is continuing to surprise me. Yeah. In ways that are continuing to surprise me.
[00:33:31] Nabeel Hyatt: How about you? I find myself using both Wordware and Replit.
[00:33:35] I know the agent framework that they launched isn't as deep as some of our other friends doing AI coding, and I know that I can AI code even inside of Claude or whatever, but the simplicity of the already-live environment,
[00:33:51] Fraser Kelton: uh,
[00:33:52] Nabeel Hyatt: and the ability to type it in and iterate very quickly. I like what both of them do in their own way, and I find myself building small tools with them all of the time.
[00:34:02] Fraser Kelton: Nice. I think whoever has the winning horizontal AI product is going to be as big or bigger than Google. I just
[00:34:09] Nabeel Hyatt: keep getting drawn back to Claude. It is frustrating in a way because you want to have a portfolio company. It's not a company, but yeah,
[00:34:17] Fraser Kelton: I sent it my health information. And the photo stuff that it can then infer against is amazing.
[00:34:24] I took a photo of a plant and I'm like, help me rehabilitate this. And it's like, here, okay, here's the three things to do. It's amazing.
Forget incumbents, how do startups fend off the LLMs?
---
[00:34:32] Nabeel Hyatt: Maybe the real answer is that this is actually not a fight between the old incumbents, pre-2023, and the new startups. The old incumbents will do what they do. They will go the way of FM radio, which is to say that they will still exist in 50 years, but be much less relevant.
[00:34:49] This is a fight over whether any new startup will be able to carve out a strong enough new workflow and use case. Right. Such that I wouldn't just use Claude for it. Right. Or ChatGPT for it. Like, that's the war. Yep. That's it. Because every time you come across something, you're asking whether it's a strong enough new pattern of behavior.
[00:35:10] That I wouldn't just type it into this thing that is already in my life, already in my pocket, and already has my context from our previous conversations. And, I mean, thankfully we're obviously investors in one of those, in Claude. But the answer is not to deploy the whole fund, and all of our founder energy in the market, into one of two companies, is it?
[00:35:35] It can't be.
[00:35:35] Fraser Kelton: No, but I do think that we should be on the lookout to see if there's another assistant, like a broad horizontal play, that has a different worldview that is opposed to Claude's. Which is to say, I don't know if this market's won. The only thing that I have in my head from this past week is that I think I am guilty of underestimating how big the broad horizontal product is going to be,
[00:36:00] and how ever-present it's going to be in our life, and how early we still are in that cycle. I don't think any of the stuff that OpenAI did in the last year was right. And this is where I was coming with you: I don't think they should put in those rails yet. I think that they should just allow it to be as good of a broad horizontal product as it possibly can be.
[00:36:21] Like, if you and I are still just surprised at how often we can come back to it. For value, it's just going to like
[00:36:27] Nabeel Hyatt: ride the curve. So, two points. One, that was my managing up comment though, Fraser. My managing up comment is that that is the fault of the product. That a good product helps you understand how to use it better over time.
[00:36:43] And right now, the affordances of the way that we use these products, and the way they speak back to us, do not help make us better. They do not make us experts over time. Given the number of hours that we have spent in these products, and especially given you were actually also on the builder side for these products, Fraser, we should be experts.
[00:37:01] And we're not. I would fault the product for that, not just the market. The second point is, I think a lot of people are trying to attack generative AI from a really verticalized approach. So what is a vertical SaaS way to use this for finance and operations, or for blah, blah, blah.
[00:37:24] What are the new horizontals?
---
[00:37:24] Nabeel Hyatt: And there are some great success cases there. You know, I'm using this for medical scribes, and you get a company like Abridge that is doing incredibly well, and so on and so forth. But you said something that I want to reflect on, maybe for the future, which is: are there other horizontal assistants? For what set of customers in the world does Claude or ChatGPT, as a structure, just not feel like the right horizontal assistant in their life, the one that is there every day?
[00:37:53] I had not really thought about that before. The doctor-scribe example is that. It is an everyday use case. It is broad to that person's career. I'm sure as they build out capabilities, it will do more than just transcription, of course. But it's a horizontal, not vertical, view of the problem that you just mentioned.
[00:38:11] Because, of course, that's what generative AI is good at: tackling broad, long-tail things.
[00:38:17] Fraser Kelton: On the managing up, I get it. I agree with you. I think you did the right job of pulling it back to, where then are we on the product arc? Like, I would be so cautious with forcing stuff too soon on that if I was still running the product.
[00:38:32] Because of the aspect that it becomes Clippy. I think that's a very dangerous place to go too soon, because the cost of getting it wrong is so high. I didn't even think of using it that way until I just did my recipe for my roast three hours ahead of time, and had it tailor it perfectly to what I wanted to do.
[00:38:55] I'm like, I want to eat at 5:30, and here's the stuff that I have, and here's an actual photo of the meat cut. Tell me it all. Yeah. And if it had come up with a suggested reply, there's maybe a 10 percent chance it would have gotten what I actually wanted, and it would have been annoying.
[00:39:10] But Fraser,
[00:39:11] Nabeel Hyatt: that's literally the problem. Like, suggested replies, knowing these companies and how fast they're running, was probably one dude for two days who built it. That's right. Okay. And shipped it. And so of course it's bad. Yeah. Okay. We're saying the same thing then. Right. And my point is that actually the product vector that might need the most improvement,
[00:39:35] Right. Sure. is related to suggested replies, and an ontology on top of that. It helps teach me how I'm supposed to think about it, what replies I'm supposed to be giving. Why these suggested replies came in this way would teach me how I might want to write in the future. Fair, but like
[00:39:53] Fraser Kelton: memory was clearly something that deserves to exist in the product because it will make it better, but you can't put one person on memory and then like ship something that's janky.
[00:40:02] I don't know. Do you use the ChatGPT memory feature and derive value from it? I certainly don't.
[00:40:08] Nabeel Hyatt: I use it in advanced voice mode sometimes, because I will say something and then I will say, I want you to remember that. Uh huh. And is it useful down the line? No. Well, I have not found the other side of that use case.
[00:40:23] Fraser Kelton: Yeah, of course. No, that's the thing where it's so obviously important, but if it's done poorly, it's just going to suck. Done poorly, or too soon, or without enough resources, or not thoughtfully enough. The other observation that I had from this is that I used to think that companion was hokey.
[00:40:39] Like, I used to just think, why personify it? This is all hokey. What are we doing? And I'll tell you, I wasn't at the Her moment on Sunday, but I was like, this is kind of my buddy. Like, this is my buddy. I don't know how to describe it, but the tone that Claude had when it told me not to use chicken broth,
[00:40:57] it did it with a little personality, and it built rapport. And I was like, interesting, interesting. All right. Should we be done? Yeah. I think we should be done for
[00:41:08] Nabeel Hyatt: today. Fraser, I come away with new thoughts, and that's added value, no matter what anybody's thinking. I don't even know if anybody's listening to this or not.
As usual, Fraser, I feel like I come away smarter from chatting with you. Thanks so much. Cool. See ya.