[00:00:00]
Chris: There's almost as much of a chance that you mess up the thing that you have as that you make it better.
Nabeel: I think the reflex for most AI products is automation of a workflow, which necessarily requires a new behavior.
Fraser: My bar to try a new product is exceptionally low. Like I said, great joy to go and tinker and explore.
And then, as Nabeel knows, my bar to stick with a product is exceptionally high.
Chris: But I do think you're building a fundamentally different product if you are trying to replace what someone's doing versus just giving them superpowers.
Nabeel: Hello, everybody. Welcome to Hallway Chat. I'm Nabeel. Fraser, welcome back. And today we have a special guest. We have Chris. Hey, Chris.
Chris: Hi, everyone. I'm Chris.
Nabeel: Chris is CEO of a company called Granola, which is frequently on the short lists of AI products that people actually use every day. So if you haven't used it, you should try it. And it has some pretty different product philosophies.
And [00:01:00] so we thought we'd have a little product conversation with Chris today. And from there, who knows where it's going to go.
Second time founder syndrome
---
Nabeel: Before we get to product, Chris, we have this topic that Fraser and I have had sitting on the Hallway Chat list for a while, of things we want to talk about.
The periodic thing that happens, I don't know, a couple of times a month: a former founder comes to you and says, okay, I've shut that thing down, I'm now thinking about what I'm going to do next. That second-time-founder thing.
I've had that conversation dozens and dozens of times over the years. It feels fraught, because you might over-rotate from the last thing that didn't work, or know too much about what you're going to do next time and how hard it is.
The path to Granola
---
Nabeel: I don't think we're going to do the full bio and how Granola happened and all that stuff. But I think if I started a new company tomorrow, it would be a really weird and hard road to have the courage to look at all the AI note-taking apps out there in the world and then say: you know what the world needs? [00:02:00]
One more. One more. So can you talk a little bit about what the journey was like for you?
Chris: Sure, yeah. I'm happy to talk about it; it can be useful to hear people's stories. My formative experience was the last startup I did, a company called Socratic.
It was an AI tutor on your phone, aimed at high school kids. If you were stuck on a homework problem, you'd take a photo of it, and then Socratic would try to teach you how to do it. This was a previous wave of AI; our AI was linear regression models. It was a very, very different type of thing.
It was quite successful from a product and usage and growth standpoint. Shannon and I ran that company for five years, and it was acquired by Google. When I left Google it was getting something silly like 4 billion queries a year, which is a lot for an iOS app that's primarily in the US.
I think every founder has scar tissue, a lot of scar tissue, scar tissue all the way down, but also the [00:03:00] company-specific kind. With Socratic, we were one of those companies that said, we're going to get huge and then we'll figure out the business model later.
And while that can sometimes be a very good strategy, for me personally, I knew that in my next startup I wanted to work on a product where it was very clear who was going to pay for it and why they would pay for it.
Whereas in education, I think the main mistake we made is that we built a product for students, but in high school, the actual person who would be paying for it is the parent.
So I knew that going into this one.
Nabeel: How did you pick a category?
Chris: I talked to other second-time founders, and they were way, way, way more analytical about it and process-driven. I talked to one guy who spent a while exploring and came up with 10 ideas. He spent about a month on each of them, basically seeing how far he could get in a month and what the pitfalls were, and then he did a crazy analysis of what to choose.
But you can over-index on what you know, and the [00:04:00] reality is you know 1 percent when you start. The whole space will be defined by the 99 percent you don't know and will only discover as you work on it for multiple years. So there's a bias toward the known versus the unknown, which can be very dangerous.
Starting by "playing"
---
Chris: What I did is, so Google bought Socratic, and then I quit Google knowing I wanted to do a startup. I didn't have an idea and I didn't have a co-founder. It wasn't a hard date, but I gave myself a year to explore. And I wasn't looking for a startup idea on day one; I was looking to play.
I'd been at Google running a team, I'd been busy, and I'd become a father recently. I had none of the headspace for that creative exploration that has no specific goal at the end of it, if that makes sense.
Nabeel: There'd been no open-ended play. You're very deterministic, you've got OKRs, you're at Google, you've got a thing to hit. You hadn't wandered.
Chris: Yeah. So here I was like, okay, I just need to play with stuff, because usually the real signal [00:05:00] comes from messing around with something for a while, and you can build these intuitions. And as luck would have it, like two weeks in, I just started playing with GPT-3. What had happened is that the instruct version had just come out.
And I was instantly hooked, but from a place of: oh, this is different. Most people in tech have had this experience at some point in the last three years, right?
Where you play with it and you're like, oh, this is different from what came before. My mental models break. My intuitions about what's possible with the technology are way off, because it can write a paper that's pretty impressive, but it can't do basic math. Like, what the hell's going on there? All that stuff. And I just basically built shit for myself, playing around with projects.
Nabeel: Were you doing a lot of interviews at that time that led you to something like Granola? Were you taking a lot of notes? Was that part of research you were doing? Conversations with your kids?
Fraser: Wait, wait, wait, wait, wait, wait, wait. Before we get there, [00:06:00] what's like one of the silliest ideas that you explored and then tossed away?
Chris: I'll tell you the main one. So, back to this wandering-in-the-desert thing. I was like, okay, I just quit my job, I'm living in London, I don't know that many people here, I want to do a startup, I don't have a co-founder, I don't know what to work on. There's probably a lot of processing, thinking, exploring to do, and instead of getting a therapist, I was like, okay, I'm going to journal.
Nabeel: You talked to ChatGPT about it.
Chris: Yeah, exactly. But writing in a journal is kind of slow, and I was playing with LLMs, and I was like, man, could I just talk a diary or a journal? I think there are these staple ideas that everyone builds when they're like, oh, I want to do something in AI.
I think this is one of them, but again, I didn't do it as a startup, I did it for myself. I built this really shitty iOS app where I would press a button, put on my AirPods, and walk around London for 10 minutes talking to myself.
Luckily, with AirPods in, you don't look crazy. And then it would [00:07:00] transcribe it and turn it into a journal entry, right? That's the first version. And then the really cool stuff was: now hyperlink every proper noun or name of a person.
I want to see everything I said about my daughter, to see that on my timeline. And this is the kind of thing where it's one API call, and you can say, oh, pull out the sentiment, or pull out the core ideas here.
And then you realize what you're really messing with is the structure of knowledge and information. That thing is malleable and stretchable, and that's what LLMs let you do. And so from this ramble from walking around, you could end up with specific and distinct ideas?
Yeah, you get it. And this is all just playing around, and it's super fun. And that's the point where I looked up and asked, is this a company I want to start? And the answer was, hell no, as much as I'd love to build this tool.
It's interesting. I just thought it'd be really challenging to build a business around it. I think there will be one, but a couple of years ago it seemed much harder than it would today.
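Purely as an illustration of the kind of reshaping Chris is describing, and not the app he actually built, here is roughly what that loop looks like with an off-the-shelf transcription and chat API; the file names, prompts, and model choices are assumptions.

```python
# Illustrative sketch only, not the actual journaling app Chris built.
# Assumes an OpenAI API key in the environment.
from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    """One LLM call that reshapes text according to the prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: turn a spoken ramble into text.
with open("walk_around_london.m4a", "rb") as audio:
    ramble = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# Step 2: one call turns the ramble into a structured journal entry,
# with proper nouns marked so they can become hyperlinks.
entry = complete(
    "Rewrite this spoken ramble as a dated journal entry with clear paragraphs. "
    "Wrap every proper noun and person's name in [[double brackets]]:\n\n" + ramble
)

# Step 3: another call pulls a different "shape" out of the same information.
themes = complete(
    "List the core ideas and the overall sentiment of this journal entry, "
    "one bullet per idea:\n\n" + entry
)

print(entry)
print(themes)
```

The point of the sketch is the one Chris makes: each reshaping of the same underlying information is a single call, which is what makes the structure feel malleable.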
Nabeel Hyatt: For what it's worth, [00:08:00] having heard that story: you said at the beginning that your particular story isn't that applicable, but I actually think a lot of folks who start their first companies go in with a kind of raw naivete.
They were playing in college or just afterwards with some friends and they go build the thing.
And even if they just made millions of dollars, they come out and they're like, oh no, but I'm so much smarter now, I would do it differently. And they often go down the deterministic approach, and I think it's completely wrong. I've seen a number of people do the thing you mentioned earlier that your friend did, the month per idea. Or I've had friends who said, I'm going to write an investor memo as if I were a VC
before I start the thing, I'm going to write 10 of them, I'm going to go through the process, and I'm going to take a year. I have lots of folks who have gone through that.
That feels like a pretty bad way to find net-new things, unless it's a really specific kind of startup where you're going after market arbitrage: you're looking for a seam in the world and then building some slightly better version. [00:09:00] Which is very different; it's faster horses versus disruptive thinking. We've known each other for a long time, Socratic was also a Spark company, and I think of you as a founder who is going to wander, actually pull a seam, and see where that leads.
I can see the connection between your journaling app and Granola. To be honest, we don't have to go through every moment of that to know that the thing you did there is just listen to yourself. You got to a spot and then you didn't stop; you weren't satisfied.
You're like, okay, this is good, can we get a little bit further? Okay, this is good, can we get a little better? And you just keep going.
Chris: I've actually thought about this for a while. What's amazing is that when you're actually doing a startup, it's almost impossible to have the time to do that exploration.
And oftentimes the newness, the core insight or signals that you're going to build on, you have to be in that state to look for them. It's a hard thing to do both. I was lucky; I had the luxury of a [00:10:00] year to explore.
Now that I'm busy building Granola, I'm kind of like, man, I wish I had more time and space to really explore and think deeply about these new things, because especially in AI there's so much newness, so many new things to invent, that that approach and that perspective is super valuable.
Nabeel: Well, I hope you find a way to stay open to it. I think that is the fight, because it isn't just one starting gun that went off in GPT land, as you know. It wasn't just a single thing that happened three years ago, so you could encapsulate your worldview in amber, make your bet, and run with it.
This is one of those situations where it changes again and again and again. So, yeah, I don't know how you fight for that. If you figure it out, please let us know, because we will tell all the founders.
Fraser: I listened to that story with great joy.
I love those stories. I could listen to those types of stories all day long. The idea that you had given yourself a year is part of it, but then the idea that you were both disciplined enough to build an experiment and explore, and then malleable [00:11:00] enough to sit with it and ask this philosophical question: what is this?
What do I have? You can see the dots connect right to what you're doing with Granola.
But if you had set it up as, okay, we're going to run this experiment for one month and then move on, I can't imagine you end up in that mind space.
The friend test
---
Chris: Yeah. So product is tough, right? It's left brain and right brain; you have to do it all, which is difficult. An important part of the story is that once I met my co-founder Sam, we decided we were going to prototype stuff together. We took insights that he had and insights that I had.
We fleshed out different directions and ideas, and then we showed them to like 30 people. And I think that is the other side of the equation, which is super important. You need to build your intuition: what do I know, from my limited life experience? You build an intuition of what you want in the world, what you would use, what you think is important.
And that needs to be pretty deeply felt. And then it's really, really important to put that in front of a lot of people and look at it from their perspective.
Out of the bunch of ideas [00:12:00] we showed to folks, people's eyes glazed over on all of them except the Granola one. It wasn't exactly what Granola is today, but it's the same idea. And then Sam and I were like, oh shit, we're going to make another note-taking app.
You know, that was the ethos going into it. We were like, all right, literally their eyes sparkled, and it was like, okay, this is the one.
Nabeel: You could get lost in here.
Product philosophy behind Granola
---
Nabeel Hyatt: Well, there's a product philosophy to this product that, strangely to me, we don't see more often. Every other note-taking app does exactly what you'd expect it to do, at least historically. I'm sure they'll all copy you very quickly, and we'll get to that, but they do the thing you'd expect, which is almost the same as your journaling app.
They take your rambling conversation and spit back out this table-of-contents summary, which is never quite good enough. It's okay, but remarkably, you never find yourself reading those generic meeting notes again. It's [00:13:00] just done.
What you do is different. We've been hemming and hawing for a little bit about what you call the way you built this product.
What's the Malcolm Gladwell version of explaining this product philosophy? Because it looks like a notepad. It's a product philosophy rooted in invisibility and flow.
Chris: I think what's confusing about Granola is that today, what Granola does is help you take notes in meetings. And therefore it looks like all the other AI meeting bots or apps out there. That's not really how we think about Granola. We all kind of look like the same thing, but I think we have very different trajectories.
The thinking behind Granola at its core is that it's a tool for you to do your thinking, to do more, right? That might seem semantic, but I think there's a fundamental difference in terms of what we're doing. My co-founder Sam came from the note-taking, tools-for-thought world as well, and he brought that background to this.
And when you set out to build a tool whose essence is, we want to [00:14:00] be the next version of paper and pencil (if the computer is a bicycle for your mind, this is the motorcycle for your mind), then meetings are a very convenient place to start. First of all, transcription in meetings is a killer feature: these models can take a rambling transcript and make something useful out of it.
It's also an easy moment to build a habit around: because meetings are scheduled, we can send notifications, and habit is really, really hard in product. But that's not the be-all and end-all. We might focus on meetings for a while because there's a lot we can still do to make it better,
but we really want to be in this place where we help you do better work, better thinking.
Fraser: You first started by calling it something that feels similar to all the other AI note-taking apps, and I would say the thing I love about it is that it's not. You held up pen and paper, and I was going to say it feels far more akin to pen and paper, or Apple Notes, in the sense that it's familiar.
It doesn't try to over-promise on the technology side. And I think there's an awful lot of discipline and product craft that [00:15:00] has to go into getting it to that point, because all of the complexity is hidden away from the end user.
Chris: Yeah. If you take this core belief that we want to be a tool like pen and paper, something that just works, that people can grab, that doesn't get in the way, that makes them better,
and you really think about what it takes to do that, you can map back a lot of the perhaps weird or unexpected product decisions we made. Like the fact that it's a Mac app. That was something very different: it's an app that's on your computer. The thinking there is that it needs to be really easy to grab, right?
It needs to be as easy to grab as your notepad. If it's in a tab in your browser, it gets lost. This is literally something that happens: you try to take notes in a browser tab, people have too many tabs open, and you'll never find it again. Or you need to be able to use it for any meeting you have.
If you have to think about, oh, this is a Zoom meeting so it's going to work, but it's not going to work on a Google Meet or a hangout or this in-person meeting, there are all these kinds of [00:16:00] requirements. If you have that lens of, what does it take to be that ever-present tool, then I think Granola makes a lot of sense.
But if you look at it from the outside: why did you build a Mac app that only a tiny percentage of the world can use? And when we started, only people on macOS 13 could even use it, because before that you couldn't get system audio. So it was a tiny little sliver of humanity. It seems like a dumb idea, but it'll work out.
AI as Augmentor
---
Nabeel Hyatt: There's something about the way you just phrased that.
The reflex for most AI products is automation of a workflow, which necessarily requires a new behavior. You can imagine the canonical version: oh, I want to make something a little bit faster, so I've got a bunch of if-then statements popping out to an LLM and doing a thing, and I want to automate that.
It's a very engineering mindset, and I need you to engage in a new behavior: you're going to read table-of-contents transcripts after you finish a meeting. That's your new behavior, because we just ran this workflow. And I think what you're talking about philosophically [00:17:00] is, instead of automation of a workflow, which requires new behavior, it's augmentation of a current workflow, where you stay in flow.
So it's the same behavior, but better. It's the code-gen equivalent of Devin, which is going to go off and do a whole bunch of things, but I feel a distinct loss of control; maybe it'll work out, maybe it won't, and I have to go evaluate it when it's done.
That's automation of a workflow. And you're saying, no, we're closer to GitHub Copilot. You're going to code the way you're going to code, you're going to do the thing, you're going to be in flow, and we're just going to help you do it.
Chris: Yeah, absolutely. There's a spectrum, but I feel like there's almost a philosophical stance you need to take when you're building an AI product, which is: are you trying to do the stuff for the person,
to outsource it to the AI? Or are you trying to give the person superpowers to do it better? I went to a talk by David Holz, the founder of Midjourney, and he went down this rabbit hole where he got really obsessed with [00:18:00] AI versus IA,
back in the Marvin Minsky and Engelbart days. One camp said, no, the future of computing is artificial intelligence, computers can do everything. And then you had Engelbart saying, no, no, we build these tools
so that humans can do 100x what they could do before and solve problems they could never solve before. In some ways it's kind of silly today, because it's like, I want AI to help augment my intelligence. But I do think you're building a fundamentally different product if you are trying to replace what someone's doing versus giving them superpowers.
And Granola is 100 percent in that category.
Fraser: The promise is you jot down what you think is important and then you have this little buddy sitting in the meeting over your shoulder who then fills in all the rest when it's needed. I don't think you deliver that product if you think of the world through that first lens, like if you're trying to automate, you end up with a very bad experience.
I feel like that promise is super high, but exceptionally hard to deliver. And at the end of the day, I don't think most humans want that, right? We have a lot of insight bouncing around our heads, and we capture it in [00:19:00] little snippets, and the idea that an AI is going to totally automate
the insight that you want to capture in that moment feels like a fallacy.
Chris: Yeah. It's super easy to look at the world and say: there are all these meetings, what's the output of a meeting? It's this text document, these notes, right? Okay, my job is to capture data, record the meeting, and then generate this text. If you look at the world that way, you build a certain type of product. And you're totally right: what humans do in the world is so much more complex than you realize, until you actually dig down and think about it.
Nabeel: Until you're trying to describe to an LLM what they're doing.
Chris: Yeah, exactly. That's what you have to do.
Fraser: You know, that's interesting. That reminds me: you wrote this blog post on Every, which is an awesome publication. And one of your big insights was that you can't write a rigid set of instructions to get these things to work.
You have to treat the AI like a smart intern and give it context. Maybe you want to elaborate [00:20:00] on that; I thought that was a really interesting way to think about it.
Chris: I think it basically comes down to this idea that the world is really, really complex. If the world is very complex, and this product could be dropped into a meeting in any arbitrary situation and needs to write good notes,
you almost have to view it through the values and goals that you have in your life. I think that's the ultimate end state: here's what I'm trying to achieve in my life, here's what I'm trying to achieve in my job, and here's what I'm trying to achieve in this meeting, with these people, these relationships.
And it's completely impossible to list a bunch of instructions of the form, if this is true then do this, because those instructions will 100 percent conflict. It's: be concise, but also remove all personal information or chit-chat or banter from the meeting, except if... you know what I mean?
There's no way to write those things. Whereas the models are now smart enough that if you give them context, as you would an employee, they're much, much more likely to talk about the stuff that you care about. [00:21:00]
It's easy to say; it took us a while to get there. And the prompts are completely different: they're much shorter, and they're also much more specific to you, trying to understand what goals you may have in that interaction.
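As a hedged illustration of the contrast Chris is describing, and not Granola's actual prompts: the rule-list approach stacks conditions that inevitably conflict, while the context approach briefs the model the way you'd brief a smart intern. The persona, file names, prompt wording, and model name below are invented for the example.

```python
# Illustrative contrast only; not Granola's real prompts.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI API key in the environment
transcript = open("meeting_transcript.txt").read()
my_rough_notes = open("my_rough_notes.txt").read()

# Rule-list style: the instructions pile up and start to conflict.
rigid_prompt = (
    "Summarize this meeting. Be concise. Remove all chit-chat and banter. "
    "Remove personal information, except when it is relevant to an action item. "
    "Include every decision, but keep it under 200 words.\n\n" + transcript
)

# Context style: tell the model who you are and what you actually care about.
context_prompt = (
    "You're helping me expand my meeting notes. Context: I'm a seed-stage investor; "
    "this was a first call with a founder. What matters to me: the problem, why now, "
    "how the founder thinks, and anything I promised to follow up on.\n\n"
    "MY ROUGH NOTES:\n" + my_rough_notes + "\n\nTRANSCRIPT:\n" + transcript
    + "\n\nExpand my notes into something I'd actually reread."
)

def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(complete(context_prompt))
```

Note that the second prompt is shorter on rules but richer on goals, which is the shift Chris says Granola's prompts went through.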
Nabeel: Would you ever expose those kinds of prompts, that kind of control, up to the user? Because there's a conflict there between the thing that just works and feels right, and the fact that, as you use any system, you grind up the curve; two years later, you can be an expert at that system, right? The first time I ever use a diffusion model and want to make a sign in ChatGPT,
before it passes off to DALL-E, I want ChatGPT to augment my prompt like crazy and try to make it make sense, so the thing that comes back is kind of good. If I'm 10,000 images into Midjourney, I don't want you to change a single letter, because I've now built some [00:22:00] sense of nuance about what I want, and I know how to express it in a way I wouldn't have a year earlier.
Balancing beginner and expert users
---
Nabeel: Is that true here? Do you think you'd ever put this into expert mode?
Chris: So, yes, we would. But, like, my philosophy is, like, the most important thing is that it just works out of the box for people. They don't have to think about it, right?
So that's non-negotiable; we have to do that. Once it works out of the box and people are like, oh hey, this is pretty good, now I want to make it amazing for me in these specific instances, we definitely want to let people do that. We just haven't figured out how. And the thing that scares me there, and you guys probably have big thoughts on this, is that it seems really easy to shoot yourself in the foot as a user, right?
Or to end up in a dead-end kind of situation. A good example is system prompts in ChatGPT. I went in there, I wrote a system prompt, it made stuff better. Four months go by, I look at it, [00:23:00] and it's no longer correct. It's out of date, and I never would have noticed.
I think you have to be pretty smart about how these are living, breathing, evolving systems. Also, we want to be able to change the underlying model. That's the other thing, right? Which might break if we let the user have a whole lot of control over the middleware. It's definitely something we want to do.
It's just something we haven't figured out exactly the right way to do.
Fraser: Nabeel and I have strong opinions on this, as you insinuated, and I feel like we're usually polar opposites. I put this in the bucket of what you just said: my view is that it's something you want to get to, but it's five years away.
And I hope it always stays five years away from you. I understand why system prompts are a thing; I've never changed mine. I can only imagine how elaborate Nabeel's is. I think the best technology products I've used feel invisible. Most people don't even consider, when they turn on a faucet, all of the complexity that's literally buried in the wall and underneath the road to get clean [00:24:00] water flowing into the sink.
And it's a lot to learn. There's a whole bunch of reasons why our taps are so simple, why we just turn them on and get the water flowing. The thing that I love about your product is that you have such an opinionated point of view, where it does feel like you've made decisions for the end user to make it as simple as possible.
And for the most part, it just works. And I'll tell you, Chris, I have the utmost respect for you. And then every now and then I stumble onto that little overlay where it says, here are your custom templates, and I'm like, oh no, Chris, just get rid of that. Why is it there? Why?
Nabeel: Well, custom templates, Fraser? What are you talking about?
Chris: So the model we're pursuing, and who knows if it'll work out, is basically minimal UI: keep as little visible to the user by default as possible, but if you go digging, there's as much control and complexity as you want. I think you kind of need both. What was interesting is that we built the first version of Granola, [00:25:00]
we spent a year with people using it and giving feedback, and after a while the feedback was: the notes are all right, but I wish I could structure them, I wish I had templates. So we built the first version of templates, which is basically in the app.
And what happened is people didn't really use them. It's more work. I'm not saying the templates are great; we can make them better. But the vast majority of people didn't use them. And then we realized: actually, now that we have these templates of what we think really good notes look like for all these different scenarios,
we should try to deliver those notes automatically for you in all those different meetings. So we basically spent all that time and energy trying to get the out-of-the-box notes closer to what you'd get if you had chosen the perfect template. And why not?
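Not how Granola actually does this, but one way to picture "delivering the right template automatically" is a tiny classifier over calendar context that picks a note structure; every name, rule, and template below is made up for illustration.

```python
# Illustrative only; not Granola's implementation.
TEMPLATES = {
    "one_on_one": ["Updates", "Feedback", "Action items"],
    "sales_call": ["Prospect's goals", "Objections", "Next steps"],
    "interview":  ["Background", "Strengths", "Concerns", "Recommendation"],
    "default":    ["Key points", "Decisions", "Action items"],
}

def classify_meeting(title: str, attendees: list[str]) -> str:
    """Crude stand-in for an LLM call that infers meeting type from calendar context."""
    title = title.lower()
    if len(attendees) == 2 and "1:1" in title:
        return "one_on_one"
    if "interview" in title:
        return "interview"
    if "intro" in title or "demo" in title:
        return "sales_call"
    return "default"

def sections_for(title: str, attendees: list[str]) -> list[str]:
    """Pick the note structure the user would otherwise have chosen by hand."""
    return TEMPLATES[classify_meeting(title, attendees)]

print(sections_for("Acme <> Granola intro", ["chris@example.com", "buyer@acme.com"]))
# ["Prospect's goals", "Objections", "Next steps"]
```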
Fraser: That's a case where I'm completely in agreement with Nabeel, whose advice I've heard him give a number of times: do not listen to your most active power users in the earliest stages and deliver what they want.
Chris: This is why I don't listen to Nabeel.
Nabeel Hyatt: [00:26:00] I think there are at least three things embedded in all of that that are worth pulling out for a second, Chris. First of all, the first time we met talking about this company, it was a different product, even though it was an AI note-taking product. And you didn't mention it earlier, but Socratic was a Spark company; I met you a long time ago. And a lot of the habit, the kind of rhythm of what you're saying, if people don't pick up on it, the thing I pick up on is close to the last line of the essay you wrote in Every, which is just: how does this product make me feel when I use it?
And having the kind of empathy to try to pierce through whatever customer feedback you're getting, whatever they're saying, and listen to what they're saying about how they feel versus all the other nouns they happen to be using when they talk about it. And then not being satisfied. Those are the two traits I assign to you in the way you navigate product. In this particular situation, that's what you need in order to become an [00:27:00] expert, in order to want to open up the templates tab and type in your own prompts. And I'm that person: before Granola, I was using a product called Superpowered, which was great.
And one of the things I loved about it is that it gave you prompt control. So I had spent months refining different prompts for the four or five types of meetings that I had, getting them right to what I wanted, and then, by the way, readdressing them a month later and changing them again,
Fraser, because these are fluid things that change all the time. All the rest of that stuff is not the way most people are going to behave. But the thing that's true is that anything you do a million times, you start to build a nuanced sense of it. The first time you do something, you just want the job done.
The act of becoming an expert in anything is basically four things. You need to be in a valid environment, in other words, one where you can understand the rules and it repeats; it needs to be an ordered environment. There needs to be timely feedback. And then you need deliberate practice doing [00:28:00] that thing.
That's the nature of any game you play regularly: you get better at it, you become an expert, right? It's valid, it's ordered, it's timely, and there's deliberate practice against that thing. And I actually think that meetings fit almost all of those categories. It's a thing you do regularly.
You're going to look back at that feedback afterward, the notes you took, if you use the notes, ideally.
And you want it to be better, and you know when it's better. I think when you get to year two or year three of somebody actively using Granola, it's going to give people a really fine-grained sense
of where it's messing up or not. Eventually the little prickly bits of Granola that aren't quite right for my particular use case are going to crop up. And it's about user control at that moment for the 15 percent of people who care, because it's that 15 percent of people who then augment the thing for people like Fraser.
Fraser, I get it, you're not going to do it. The whole deal is that at Spark, I'm going to be the guy that goes and does that. And [00:29:00] then I just need the power to hand that back to Fraser and be like, yeah, I did all the work, I prepped all the stuff.
Here are your new Spark templates.
Chris: What I want to do... I actually think notes are fine, but notes are going to be less and less important as a format in the future. There are a million things I'd like to build into Granola, but something I think would be cool is artifact generation. You know how there's the chat on the right-hand side, and you can chat with a meeting or whatever?
Something Claude-style: okay, write a follow-up email for this, or generate an investment memo, whatever the next step is. And there, I would like to let Nabeel write a super, super detailed, maybe multi-step prompt or workflow and then share that with Spark. So you can basically say, here is the Spark secret sauce,
the [00:30:00] company analysis or whatever, because I think there's something pretty cool there. If you can get the one person who's willing to put in the work, the other folks can benefit from it, and then there's a social accountability, so the thing doesn't wither and die,
because people are actually, actively using it.
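As a sketch of what a shareable, multi-step prompt workflow like the one Chris describes could look like, under the assumption of a generic LLM call: the run_llm stub, step names, and prompt text are all invented for illustration, not a Granola feature.

```python
# Illustrative sketch of a shareable multi-step prompt workflow; not a real Granola feature.

def run_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever LLM API you actually use."""
    raise NotImplementedError("plug in an LLM client here")

# A "workflow" is just an ordered list of named prompt steps that one power user
# writes once and shares with the rest of the team.
INVESTMENT_MEMO_WORKFLOW = [
    ("extract_facts",
     "From this transcript, list the company's product, market, traction, and team:\n\n{transcript}"),
    ("analyze",
     "Given these facts, assess strengths, risks, and open questions:\n\n{previous}"),
    ("draft_memo",
     "Write a one-page investment memo in our firm's format using this analysis:\n\n{previous}"),
]

def run_workflow(workflow, transcript: str) -> str:
    previous = ""
    for name, template in workflow:
        prompt = template.format(transcript=transcript, previous=previous)
        previous = run_llm(prompt)  # each step builds on the last step's output
    return previous

# Example usage (commented out because run_llm is only a stub):
# memo = run_workflow(INVESTMENT_MEMO_WORKFLOW, open("meeting_transcript.txt").read())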
Fraser: I'm such a fan. You're here because you have a note-taking app, and you said, yeah, note-taking isn't really the thing, here's what I want to do. And that's my rebuttal to you, Nabeel, when I was making the faces: I don't think you want to cater to the 15 percent power user on note-taking.
I think that's a dwindling, small, uninteresting place to be. But if you own the place where we're capturing our thinking, the pen and paper, in a world where this technology of artificial intelligence and everything else exists, I think there's so much you can do with that.
And the breadth and depth of that value has got to be much larger. [00:31:00]
Nabeel: I don't disagree with that. It's just that it's a horizontal product, not a vertical one, and not every startup is a horizontal product. In a horizontal-product world, the question is: will the LLM know better how I take notes in my accounting practice in Bangladesh, or will Chris figure that out, or is there an intermediary, somebody who has been using and loving Granola,
where one of those accountants in Bangladesh becomes an expert? You see it in lots of horizontal products: they need to be easy enough and simple enough to get started, and then you have this layer of experts that emerges bottom-up, like the economy of people making weird Notion templates that you download. And that doesn't mean you make a Notion for the people who make Notion templates.
That would be the wrong thing, but it's part of a viable, important, healthy ecosystem.
Fraser: I get it. And Chris, my wording was atrocious when I was [00:32:00] talking about the templates earlier. I actually think it's thoughtful how you've integrated them. They do give Nabeel the power, and I don't have to deal with it.
It's out of sight, out of mind for me, and he's able to get it to the point he wants. That's probably the right split: I don't ever come across it, and then Nabeel has it for when he needs it.
Nabeel: Yeah, we just want more AI products to do that. Can you talk to all of them, Chris? Can you talk to all of them?
Chris: I mean, I don't know if it's a winning strategy; we'll see how it all plays out. I do think that the people who like Granola like it a lot.
Fraser: You have taste, and you've made a lot of opinionated decisions.
The combination of those two things generally works out pretty well. One big decision you've made is that you don't show the transcript, which, the first couple of times I used the product, I thought was weird. Weird is maybe the word. You can't see it.
Chris: You know you can see it, right? Most people, a lot of people, don't discover this: you can click on the little dancing bars [00:33:00] and look at the transcript in real time during the meeting.
Fraser: Can you see it after the fact, though?
Chris: You can see it afterwards too; it's just hidden away. I think the reality is that transcripts are really long and unwieldy, a pretty crappy format for digesting information.
So yeah, it's there, it's hidden away on purpose. But what people do want sometimes is to go back and say, oh damn, wait, really, they said that? What was this point? At that point you want to be able to zoom in and ask: did the transcription mess up and completely misinterpret what was said,
or is the LLM messing up, or is this real, and what was the context around it? I think it's super important to be able to do that. But full-on transcripts? Not super useful.
Fraser: I have a question on process as I've listened to you on this call. So you have clearly this big vision, right?
You're like, forget the note-taking thing, this is [00:34:00] just act one to get us over here. And you have all sorts of short-term requests flowing in from end users. So how are you prioritizing where you spend your time: what gets shipped in the next three months versus what's pushed out a year from now?
Chris: Oh man, it's, it's tough.
Granola's Initial Launch and Challenges
---
Chris: It's funny. When we built the first version of Granola, there was basically no post-meeting anything in it, right? It just generates the notes, and then there's the chat, so you can use the chat to generate actions if you want. It's not great yet; there's nothing really useful post-meeting.
And I was like, okay, we're going to launch that and then get to the post-meeting stuff right away. And it's been I don't know how many months since we launched, and we haven't touched it, just because there's so much to do on the basics to make them really good.
I think it's tough.
Balancing Product Development and Market Fit
---
Chris: I'm curious what y'all think. On one hand, with what we have, the people who like it really like it, it's growing, it hits [00:35:00] a nerve. On the other hand, the latest generation of AI means all this new stuff is now possible, and it's going to happen, and it's going to happen quickly, right?
And Granola is like 3 percent of what I hope it will be not that long from now. I feel like traditional product wisdom would say: you have product-market fit, just do that, don't mess it up, don't go try to build more products.
There's almost as much of a chance that you mess up the thing you have as that you make it better; more features oftentimes don't make it better. But in this space, man, I don't know. People ask me about competition and who I worry about. I worry about
a startup launching tomorrow more than I worry about the big companies, personally, because the stuff is evolving so quickly, and I think AI-native products built on these new building blocks will look and feel very different.
Where Granola is headed from here
---
Chris: So I think it's tough [00:36:00] in terms of what we prioritize.
When we launched, we had four people on the team, which was not enough given what happened in terms of growth. So the first six months after launch were spent digging ourselves out of product, engineering, and company debt, in terms of just being super understaffed for the volume of what was happening.
Now I think we're going to try to split it roughly 50-50. We have a lot of companies coming to us saying, hey, we have a bunch of folks using Granola, we'd like the whole company to use it, and there's a lot to do there to mature and become more sophisticated. At the same time,
I feel like the product we have today, if it doesn't evolve quickly, will feel outdated and obsolete in, I don't know, 12 months. I think both of those things are true.
Nabeel: If you were starting Granola from scratch right now, and internalizing for a second everything that's going on, what would you do differently?
What would it look like if you were trying to kill you?
Chris: It's a great question. I [00:37:00] feel like we should force ourselves to ask this question every month. Okay, here's my answer.
What are the big things that have changed since we launched Granola, right?
I see two directions. One is that models have just gotten way, way, way smarter in terms of raw intelligence. And the second is multimodal. At first it's images, but you can imagine streaming real-time video and audio into the model, right?
The multimodal one is interesting. I thought it would have a bigger impact; you could stream the entire meeting into an LLM, and we haven't done that yet. I think there's probably a product to be built there. It is interesting, though, because at the end of the day, a text editor as the interface between the human and the AI
is a very familiar, very powerful, high-precision interface. So even though multimodal is coming, things might still look like text editors for a very long time. If I were trying to [00:38:00] kill us, I would maybe try to leapfrog the notes part a little bit.
Notes are important, but what's the post-meeting, post-notes, artifacts stuff? I would maybe focus on that. Why did you have the meeting? What was the point of it? What are the outcomes you want? And just try to do that really, really well, because I don't think anyone's doing that.
I think there's a lot of unexplored, really powerful stuff to figure out there. The approach we took is that we have to earn your trust to get into the meeting, right? We need to be useful from the get-go,
so that then we can do XYZ with all this context and make your life easier. But maybe there's an alternative, maybe you can just leapfrog. I don't know.
Nabeel: Fraser, what else has changed? What are the tools in the tool chest we maybe didn't have a couple of years ago? And, looking forward a little bit, if you're planning for the next year?
Fraser: Where my mind went as a user of Granola wasn't toward like the technology that may be coming.
It was more around [00:39:00] how I use it, where I use it, and also where I don't use it. There's a big part of my life where I'm still turning to the pad and the pen, and it almost feels like Granola shows up and says, nope, you can't use it right now. Sorry, put down the pad and pen. Nabeel, cover your ears: go use Apple Notes, because this isn't a meeting. This is a product for meetings, we've optimized for meetings, and any other time you want the paper-and-pen metaphor, go somewhere else. If it's a tool for thought, then what are the other places where you're thinking? It seems weird that to prepare for this call, I have an Apple Notes document open, filled with my ramblings to show up with. That's a bifurcation of a use case that I feel like you'd want to own.
Chris: Can I ask a totally different question? [00:40:00] Yeah, it's tough.
Vertical AI vs. General Assistants, who wins where?
---
Chris: It's a tough space in which to look five years out and say, who are the winners going to be, and what are their characteristics? A question we got a lot, and still get, though less now for some reason,
and that I still think about, is: what do you think is going to be owned in the future by a general assistant versus a specific, vertical, AI-powered tool?
Does ChatGPT or Claude... do you have one for your whole life, your personal life, your work life? Do you have 50? Is that even the wrong mental model? Where's that headed?
Fraser: Well, Nabeel's been on this journey with me over the past couple of months, where I came back and told him that even I was guilty of underestimating how broad and general the general products were going to be.
And where we came out in past conversations was that for [00:41:00] things with high utility but low frequency of use, these broad horizontal layers are awesome.
I can send it my health information the once a year, or once a quarter, when I have it, and it's just fine. So then the question is: I think there's going to be a very small number of them, and I think it would be weird if I had one for both my work life and my home life. That seems strange.
I don't want the privacy and security people at work knowing my personal assistant information. I think we've seen enough history to know that people want to separate those worlds. Then where do we go? Nabeel and I landed on the idea that if there's broadness in work, then you could see a horizontal work assistant carve out some space.
Nabeel: I think we've probably spent at least three or four hours over the last year on this podcast, [00:42:00] Chris, having some version of that conversation. Literally, maybe our second episode was: how do you not get hit by the tidal wave of ChatGPT? Where's safe ground?
Where are you? How do you build on the S-curve of AI? It's a very frequent theme, and I'll be itching to listen back in two years to the way we're talking about it now, and to the many ways we're probably quite naive about it. I tend to try to pick these apart as axes,
and frequency is an axis that Fraser just brought up that's worth thinking about; who wins is going to sit somewhere on that graph. If I only do something once a year, then yeah, I probably use a generalized tool. The more I move toward doing it every day, the more I'm likely to use a specialized tool.
The Wisdom of Experts Era
---
Nabeel: That's obvious, that's clear. The other framework I've been trying to think through lately is that both you and I, Chris, as founders, started in the Web 2.0-era stuff, the wisdom-[00:43:00]of-crowds era.
And I think a lot of what this latest instantiation distills down to, if we set aside AI and all the other labels, is really the wisdom-of-experts era. We are taking PhD-level knowledge about a thing, encapsulating it in a model, and then letting you as a consumer access it.
That's kind of what's happening. So why is code gen good? Code gen is good in Claude not because we took all of the cat poems on the internet, ingested them into an LLM, and made it smart. It's because we took really well-written code and ingested that into the LLM. We took experts at that,
and we are now trying to distribute that expertise to everybody. It's not always at a PhD level, it's not writing at a PhD level yet, but you can imagine it. This is what the large LLM companies are doing now: they're paying PhDs by the hour to fill out math equations and put them into the LLM, over and over again, to try to build that [00:44:00] expertise.
So I think the other axis here is: for the thing you're doing every day, if we take the frequency graph, is there somebody else in the world who's incredibly good at doing it, better than you? And can that pattern of behavior be captured in the model? If so, it will probably bleed into some type of user interface, some way for you to use it.
That's the other axis I think about. I don't know what that means for how Granola navigates itself, but I think it's true in every category. There is probably somebody who has the same thoughts as me coming out of a meeting, but who is ten times better at recording the nugget of the idea they just came up with, at making a habit of getting back to it, at taking action on that idea and executing on it and making it happen, at knowing what it means for who I should speak to, how I should come back to it, or what rabbit hole of research I should go down. All that stuff.
And capturing that wisdom of an expert, [00:45:00] and trying to make the model nudge me to be just a little bit better in that direction, that's going to feel like the superpower.
Fraser: Listening to Nabeel, I have a slightly refined take on what I shared earlier. I feel pretty confident that there will always be a space for you adjacent to those broad assistants, Claude and ChatGPT.
And here's the reason why. Maybe the last time Nabeel and I made a recording, he laid out the framework that there are some products where, if you look back over the technology arc, new versions keep arriving, like IRC became Slack and Discord; a whole bunch of products have that arc.
They're durable. The interface goes back to rock on wall, right? People scribbling stuff. You held up a pen and paper. I think the fundamental interaction of taking notes is dramatically different from the [00:46:00] fundamental interaction with a chat assistant, right?
And I also think there's always going to be a place for somebody to take what's up here, or here in this conversation, and bring it into a world that persists.
Chris: Related but also unrelated.
Where are we going to end up with wearables, AI pens?
Fraser: Wait a second, wait a second, wait a second. You asked us a great question, and you told us you've been thinking about it for months because you've been asked it, and now you get asked about it less and less. But you must have a great thought.
So what's your opinion?
Chris: I think that if you move the time horizon too far down the line, who the hell knows, right? We might be back to fighting with sticks and rocks by then. It's hard to imagine what that looks like.
But for the foreseeable future, maybe a different axis is this idea of a power tool, right? A tool for an expert in [00:47:00] an expert situation. There's iMovie, but there's always going to be Final Cut or Avid or whatever came before. If it's important in your job that you are extremely good and efficient at some task, then tooling will pop up to support that.
And I think there's an axis there where the best people in the world at something will use specialized tools. For us, there's a question of whether meetings, or knowledge work, are one of those, or whether they sit in the more general category. I think there's enough that's specific to the workflows around meetings, at least for the next couple of years.
The biggest thing you don't realize about meetings is that people are back to back. Meetings overrun, which means people have zero time to do anything with the notes between meetings, and the hyper-efficiency of getting from one context to the next is actually incredibly important, right?
Dumb things, like opening the right Zoom in the right place [00:48:00] on your screen, having a place where you can immediately start chatting, or being able to take my headphones off halfway through and switch because, oh, this Zoom link didn't work,
so I'm going to go to a different Zoom. There are all these things that happen in the real world that, if you haven't been building in this space for the last two years, you wouldn't necessarily appreciate, but all these paper cuts get in the way. And I think there's enough of that that specialization really matters.
Again, in the short term. If you zoom out, people say UI is not even going to exist, it's all going to be dynamically generated on the fly, eventually. Maybe, I don't know. I think that's kind of far-fetched, personally. Also, as a human, there's something nice about predictability, about knowing that this button is going to be here, rather than a redesign on the fly for me.
So I think, again, for a little while, the specificity of the use case and all the thorns around it, the paper cuts, whatever you might get nicked by, there's a lot of value there for sure.
Nabeel: There's no way the UI goes away. We're in the MS-DOS era of AI, and there's a reason. [00:49:00] All of the proponents of the UI-less world have never tried to design an interface where you have to explain to somebody what's available.
Sometimes you need to know what the two available options are. Like, I'm just going to speak to my speaker? Try figuring out the available apps on your Alexa without using your phone. It's insane. It's the reason you want to read a menu at a restaurant instead of having the restaurant read the menu to you while you listen for the next 15 minutes.
Chris: The thing that's less clear to me is, so right now, Granola is very much one meeting at a time. That's how you interact with it. And we want to make it feel much more like you're interacting with a set of meetings.
Down the line, you jump into a meeting and you'd like Granola to prep you like the best chief of staff in the world would, right? You get a dossier: here's what you really need to know, and here's what's really important in this meeting. And you can see how, oh man, if it doesn't have my emails, it's going to miss out.
There's something really important in that email that I need to know before this meeting. And then you're like, okay, so Granola should have access to my emails, but what else should it have access to? And then you're like, okay, am I granting [00:50:00] Granola access to my Slack and my email and all these different services?
And am I doing that for all these different, slightly vertical AI agents? That's the part where it's like, is that getting replicated over and over or not? Or do we have to win, like, do we have to be the one core knowledge-work AI assistant, or can there be five? And is that actually preferable and better for the user?
Like, that's the stuff that's less clear to me,
Fraser: my bar to try a new product is exceptionally low. Like I said, great joy to go and tinker and explore. And then as Nabeel knows, my bar to stick with the product is exceptionally high. Maybe the trite way of answering your question is that I use ChatGPT and Claude all day, every day,
and I use Granola all day, every day. That's the starkest evidence for me that there needs to be two different products here. Yeah. I just would have no patience for it otherwise. I'm not looking to add complexity to my life just to have new products come into it.
AI passing judgment
---
Nabeel: I was reflecting, Chris, on your earlier comment about what is different about the world of AI [00:51:00] and what's going to change over time.
The other one is: at what point of context does it become the really good chief of staff who is assertive and kind of knows your weaknesses, and isn't just giving you an information dump.
They have judgment. Yeah. And so, when do you cross over from utility to something that has judgment? That's also a thing AI products have taken a stab at a couple of times, often in the family-therapy kind of situations and so forth, and in constrained areas you see little bits of it.
But I would suspect that with enough context, it should be saying: you're about to go into a meeting, and by the way, you tend to ramble on when you talk about this subject, so keep it short, dude. There's that measure of judgment. Like, my father had been working on a manuscript for a while.
And uh, I fed that manuscript into Gemini and into ChatGPT and asked the normal set of questions, like, give me feedback on the whole thing. And, you know, it's fine. It's okay. Then I fed it [00:52:00] into Claude, and inside of that artifact I also fed in the authors that he really likes and enjoys.
Versus just saying, you know, "write like Paul Graham" kind of stuff, I actually fed in some manuscripts from authors that he really enjoys, and then had it give him real feedback. And man, if it wasn't, like, incredible. That's a mixture of the context setting of the other writing, and it's also a mixture of Claude just being better at this kind of stuff.
But it was really, really good. And it was a moment over Christmas break where I was like, why haven't more people realized that this is actually doable today in a way that it wasn't before? It still can't write amazingly well, but it can pass judgment pretty well with the right context. And not many products do that.
Chris: There was an example on Twitter where Sam Altman released a statement, and underneath someone had fed the statement into Claude with a question like, provide a critique from a PR perspective and break it down. And it was pretty eye-opening. You could imagine being in a PR war room and the tactics they might be using, broken down through [00:53:00] that lens. Which, you know, I'm not from that world, so at least for me, I'm like, oh, wow, here are these methodologies I hadn't heard of that may or may not have been used, but look like they were applied in this statement.
Nabeel: Yeah, like at some point, I want Granola to know my patterns of speech over the course of years. What are the things I'm talking about now that I wasn't talking about two years ago? What questions should I ask going into this meeting that I wouldn't think of? That prep can be deep and nuanced and intimate in a way that I think is almost impossible if you don't have that context. You're right, maybe you need emails and Slack and everything else as well.
But that feels like a magical next step. I'm looking forward to it.
Fraser: Earlier I said there's a feature that's five years out and I hope it always remains five years out. This is a feature that's like a year out, and I hope you ship it in a year, because that would be awesome. It's not the one for tomorrow, but that would be so great.
Wearables' role in AI
---
Nabeel: You had a question about wearables you wanted to ask. Did you want to jump in? Yeah,
Chris: where are we going with these? I feel like every couple of weeks there's a new AI wearable, and part of that feels, [00:54:00] to some degree, inevitable. But also, even for someone at the forefront, a lot of this stuff still feels kind of dystopian today.
And like, you know, I keep questioning myself, am I getting old or is it like, oh, it's not quite the time or the form factor or whatever, you know what I'm saying? I'm, I'm curious where you, where you land. Cause I mean, your jobs are basically to predict the future, right?
And like the timescale of that, of that future. No,
Fraser: no, no, no. I mean, our job is to meet people who are predicting the future and then just try to adjudicate on, on whether or not. Which ones are. Yeah. Okay. Yeah. Which ones
Nabeel: were like, what side of crazy are they on? Our job is to listen. Uh, I mean, I know that we're talking a lot on this podcast, so we're all just sharing because I think of this as an active conversation more than a pontification.
That's what I hope us chatting here is. It's also just an excuse to hang out with Fraser and other good people like you.
I have this wearable. Oh, which one's that? This is Plaud, P-L-A-U-D.
I have this wearable. Uh, this is also, this is their other, [00:55:00] non-pendant version. And if I walked around the corner and spent a minute, I could drag out another six or seven other wearable devices. It seems utterly inevitable. I think there's a really simple question of input and output modality.
There's a really simple question of battery life that will drive it. I don't want my phone on all the time. Even if we get over the privacy concerns and all of that stuff, I worry about my phone battery. And if I want this thing to listen to everything that's going on in my life, so it has more context, then for the very simple reason of battery, I do expect it to be a different product.
So yeah, I'm very bullish on wearables. I think the value back has not really been delivered yet. The reason you feel iffy is that you haven't really gotten real value out of using one of these yet. And once you do, I think you're on the other side of it. If there's a job to be done and it's doing the job, then you're happy.
I had a really interesting, fun, long conversation with my [00:56:00] son a month ago about what he wants to do in the world and the things he has going on. And I happened to have the Plaud on my jacket.
There was no taking notes, no pulling out my phone; I was able to tap this really quickly and record it. I would love to have that conversation in 10 years, right? What a wonderful and amazing artifact. It will feel like photography, right? To me, that just ages well and gains value over time versus losing value over time.
I don't know.
Fraser: I mean, I'm not adding anything to that beautiful soliloquy. That's it. That's it. The capture piece is going to be like photographs.
That's a great framing. There will be cherished things, just like with photos today. We use them to communicate, we use them to capture and remember, we use them for nostalgia.
Nabeel: Yeah. We also use it to take a picture of the parking sign for utility, I guess.
Chris: It depends how you use [00:57:00] the photograph, I mean. It's
apples and oranges, but like, I don't disagree with anything you said. I also don't have a very specific vision of how this is going to go down, so this is more in the sense of exploration and discussion. Yeah. When Snap launched and your photos disappeared, that was a hard thing for people to wrap their heads around as being a positive, and it absolutely changed the way people interacted with it and what they shared and how they felt.
And I think there's something similar here, or there's some parallel, some point that we should bear in mind. Maybe we normalize to it, maybe you and your son have the exact same conversation if you know you're miked 24/7, but maybe not. The human organism and society, like, react, you know, like antibodies to certain things.
Fraser: But I think what you just described is like the, the importance of great product [00:58:00] and taste, right?
We adapted, but Snap, like, unearthed something beautiful. It doesn't mean that all of the other ways in which we've used cameras have disappeared. It also doesn't mean that we totally changed our behavior. We've moved and we've lurched forward in lockstep: the technology has adapted, we've adapted, and the products have been crafted to meet us there.
I think the same thing is going to be here.
Nabeel: I have no doubt about that. I think that's good pushback, Chris. We should be careful to say that, for the purpose of this short discussion, we boiled wearables down to recording audio, and of course wearables have lots of other modalities and things that they do. But to pull on that thread for a second.
There's this guy, Ivan, uh, Ventrov, who wrote an essay, just a couple weeks ago. Chris, if you haven't read it, you should. Like, I shared it in Slack at Spark yesterday. It's called Shallow Feedback Hollows You Out. And it is ostensibly about the feedback loop that you just talked about, but applied to kind of writing generally.
Why does somebody [00:59:00] who is a really great and interesting thinker, why does it feel like, in this world now, they pop out into the world with a brilliant insight, and then, if you just watch them, his phrasing basically is that thinkers co-evolve with their audiences. Because they are now in this very tight feedback loop with their audiences, they essentially become crazy people over the course of the next decade. Their audience just gives shallow feedback, because the average of a large audience isn't thinking that deeply about the thing.
And then, because they're co-evolving with their audience, that original thinker becomes shallower over time instead of deeper. And I think that I want something to record everything that I say, but I don't know that I want access to the transcript or the audio of all of it. I want something to record everything that I say so that an [01:00:00] LLM can get smarter about me, so that I can get smarter about me.
I want it to be my augmented memory. That's very different from wanting it to be used in a social context. I think we should be very careful about the things that we introduce to others in a social context. And so I agree, we are incredibly social creatures, man. If the recorder is on all the time, there's no way we just act exactly the same way.
Social Dynamics and Privacy Concerns
---
Chris: And I think what's tricky there: just imagine there's a device and it's very clear that the audio recording and transcript are not accessible, but there's all this upside value, right? And I think everyone could feel good about that. There'll be other devices that look similar that don't do that. You know what I mean? And I think that's where the social dynamic gets complicated: is just the compressed version of the knowledge or the content accessible, or queryable, or what have you? What's the social contract around that? That's what
Nabeel: is unclear to me. I think the social contract's important. Whether this thing is going to be used, or can be used, and in what context, is really important.
Like, I'm the one who had, Chris, a really [01:01:00] visceral reaction. I emailed you after trying the beta version of the mobile app for Granola. Just watching the little transcript thing bounce up and down, and thinking about it recording words, like, I don't like it. It's like, don't do that. Just let me open it up and start my meeting.
Chris: Funny story about the Granola mobile app. It's not public; we're working on one. The engineer who's working on it, his name's Jonathan, got engaged over the holidays, and he turned on the app on his phone, in his pocket, while he proposed. And the notes are great. They're really funny, actually.
I've been asking if we can tweet them, because it's actually a really cool record to have of that moment, in a weird way. Anyway, maybe that's neither here nor there, but going back to that earlier conversation, there's something kind of magical about being able to go back to certain moments.
Fraser: It's not neither here nor there, that's totally apropos of the previous comment, right? That's a magical moment captured for them now that wouldn't have been before. Yeah, I
Nabeel: wonder if [01:02:00] there's a secondary record button. There's the like, it's always thinking and it's always gaining context. And then maybe what you really need is another button, which is actually a record button, which is like, okay, this time, this moment, I actually want the audio and the transcript of this part.
Chris: I actually agree on that. I actually think we are missing a term or a verb for it.
Nabeel: I can get my head around that. Yeah, for, for me, it's very related to the, if an LLM reads a document, is that a copy of a document or is it just learning from the document?
The Need for New Norms and Labels
---
Chris: With all this, I just think there are new norms that need to be established. And I think we're still at the stage where the thing needs to take form a bit more, and then it needs to be labeled, and then there need to be norms that evolve around those labels. Right now it's just amorphous. It'll be interesting, though. It'll definitely be interesting.
It'll definitely be interesting. I do think until we have real utility that comes out of these. I think it'll be new because no one will care the moment they become really useful. I think that's when it gets interesting. That's like, [01:03:00] then we're off to the races.
What else could be Granola-ized
---
Nabeel: So let's use that as a kind of last question as we're standing up, because I know we're over time. We've talked about what we feel is good and magical about Granola, that it augments a human behavior.
We talked about the fact that that behavior being frequent matters a lot. So, thinking through that process, is there, and this is for you too, Fraser, something else in your lives that you wish was Granola-ized? Is there a pattern of behavior that you do in the rest of your lives on a regular basis that you just wish you had augmented,
you know, not with a chatbot co-pilot, but with somebody who kept you in flow?
Anything for you two?
Chris: Well, what came to my mind, and I actually don't know if it's a good example, but something I wish were better, is my calendar and scheduling.
And the reason I say that is because, on one hand, you could imagine a really smart [01:04:00] agent doing a lot for you, right? Bringing best practices in and moving stuff around, defragging your calendar or whatever people who are world class at this do. And, you know, looking down the line and realizing, hey, you're not going to have enough time a couple of weeks from now, so, whatever, it rearranges stuff.
But conversely, it'd be incredibly frustrating if you couldn't go in there and change every little detail about any meeting yourself; you need to have that control. So I think, a little bit like with Granola, you have to give people the ability to lean in as much or lean back as much as they want.
I think you'd need that same ability with a calendar app.
I think there might be something there.
Nabeel: I'll give mine. And it might be related to Granola, it might not. It's going to sound really basic, because it's another one of these IRC-becomes-Discord, it's-a-forever-thing categories: project management. Like the Asanas of the world. There is always some version of trying to break down tasks, and I think it's different than personal to-dos in a work context.
Those things tend to have dependencies. I tend to not [01:05:00] be able to think around every corner. It's always a question of wording. You know, I used to do a thing that I don't do anymore: right after we would invest in a company, I'd sit down with them and we'd go through their annual goals, and I'd do a lot of rewriting of the language with the founders to make it more explicit.
You know, there are like 20 or 30 rules of best practice for doing good, structured project management inside of an org, or any group of people trying to get anything done.
And I think it's a world where I don't think you need me to type in the project title and then fill out a thousand little bullet points with a Gantt chart. It's more like, let me organically make something. I can't even imagine what the UI would be right now. But let me organically make something and then have it augment that behavior, build out that behavior.
Help me think around dependencies and corners as I'm working on it, while keeping me in flow. That's a product that should exist. That'd be great.
Fraser: [01:06:00] Yeah, I agree.
Nabeel: Fraser, you got anything?
Fraser: Earlier you said that we don't predict the future, we listen, and this is a case where I'm just going to listen. Nothing's coming to mind.
Nabeel: That's great. Well, if anything comes to mind later, you can always tweet it out or email me.
I'm around. Awesome. Chris, thanks so much for joining us today. Thank you so much. Yeah,
Chris: we'll listen back and think about how stupid all the stuff we just said was. But thank you so much. This was fun.
Nabeel: Always done with humility. All of this, always done with humility.