AI First with Adam and Andy

In this episode of AI First with Adam and Andy, author and tech thinker Steven Johnson shares the surprising origin story of Google’s NotebookLM. Learn how a writer’s personal obsession with note-taking turned into a viral AI-native product now transforming enterprise workflows.

What is AI First with Adam and Andy?

AI First with Adam and Andy: Inspiring Business Leaders to Make AI First Moves is a dynamic podcast focused on the unprecedented potential of AI and how business leaders can harness it to transform their companies. Each episode dives into real-world examples of AI deployments, the "holy shit" moments where AI changes everything, and the steps leaders need to take to stay ahead. It’s bold, actionable, and emphasizes the exponential acceleration of AI, inspiring CEOs to make AI-first moves before they fall behind.

Forum3 (00:02.396)
This is AI First with Adam and Andy, the show that takes you straight to the front lines of AI innovation in business. I'm Andy Sack, alongside my cohost, Adam Brotman. Each episode, we bring you candid conversations with business leaders transforming their businesses with AI. No fluff, just real talk, actionable use cases, and insights.

I'm so excited for today's episode. Welcome, Steven Johnson from Google. Steven, super happy to have you on. Thank you for making time. I think today's episode is about why every exec will soon rely on an AI notebook as a second brain. That's how I want to frame the episode, and I thought Steven would be the perfect person to talk about that.

Steven, can you give us a little bit about your background? Introduce yourself.

Steven Johnson (01:29.324)
Yeah, well, I've been a friend of Andy Sack's for something like 35 years, an incredibly long time. Yeah, yeah, that's true, that's true. But in addition to that extraordinary accomplishment,

Forum3 (01:38.96)
Not counting the last track.

Forum3 (01:47.428)
And that is an accomplishment,

Steven Johnson (01:47.502)
I've spent most of my career as a writer. I did a couple of startups over the years at various points, but most of my career I've written books and done spinoffs of that. I did a couple of TV series and podcasts and things like that. I've written 14 nonfiction books on the history of science and innovation and a bunch of different topics. But I had

kind of always been very interested in using technology to help me with the writing process and the research process, as you say, as kind of a second brain organizing my ideas. And it's partially because of that obsession that I ended up at Google.

Forum3 (02:33.084)
So we have professional author Steven Johnson. Steven, I'm just curious, on your author side, what is your favorite of your books?

Steven Johnson (02:40.322)
Yeah, it's a little like asking which is your favorite kid: you do have an answer, but it keeps changing. You get kind of sick of some of the books. I write history books that have multi-layered but narrative structures, and then I have idea books. Of my idea books, there's a book I wrote called Wonderland, about the history of play and

curiosity and how it's been a major driver of innovation over the last 10,000 years. That's kind of my favorite idea book. Of the narrative ones, I don't know, the last one I wrote, The Infernal Machine, about dynamite and the birth of terrorism and the NYPD, I'm pretty proud of that one.

Forum3 (03:30.706)
All right, we gave you two plugs, we've got one more in. What was the second book you just mentioned? And now NotebookLM. Most of our listeners, I think, know NotebookLM, but can you just give a short intro to NotebookLM?

Steven Johnson (03:32.844)
There we go.

The Infernal Machine.

Yeah, I love it.

Steven Johnson (03:50.84)
Yeah, I mean, and actually AI first is a really great phrase to use for NotebookLM, because we really built it, and I can tell you the backstory of how we came to build it, but we were really trying to build an AI-native application: not add AI to an existing word processor or add AI to an existing, you know, slide creation tool, but really trying to think about, what does software look like when you start with the premise that there's going to be a language model at the center of the application from the beginning?

Surely it's not just a chatbot. Surely there are other UIs that you can create. So over time, we've come to think of NotebookLM as a tool for understanding things. That's fundamentally what it is. Everything in the application is designed to help you make sense of the information that you need to understand to do your job.

And so all of the interactions, and this is the core principle that was there from the beginning of the product three years ago, when this was a very unusual idea (it's now become more common): all of your interactions inside of NotebookLM are grounded in the documents, in the sources that you give it. Every time you ask a question, the model will only answer based on the sources you've uploaded. And so everything is designed to help you understand the content

Steven Johnson (05:15.95)
of those sources and use that understanding to create things.
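A minimal way to picture the source grounding Steven describes: the app assembles a prompt that contains only the user's uploaded sources and instructs the model to answer from them alone, with citations. The Python sketch below is purely illustrative, with hypothetical names; it is not NotebookLM's actual implementation.

    def build_grounded_prompt(question: str, sources: list[str]) -> str:
        """Compose a prompt that restricts the model to the supplied sources."""
        numbered = "\n\n".join(
            f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
        )
        return (
            "Answer the question using ONLY the sources below. "
            "If the answer is not in the sources, say you don't know. "
            "Cite source numbers for every claim.\n\n"
            f"{numbered}\n\nQuestion: {question}"
        )

    # The returned string would be sent to whatever language model backs the app.
    prompt = build_grounded_prompt(
        "What were the top user pain points?",
        ["Transcript of interview 1 ...", "Transcript of interview 2 ..."],
    )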

Adam Brotman (05:21.712)
Yeah, by the way, I'm so glad that we've got you on. I mean, Steven, I had no idea you knew Andy. I sent him a tweet you had sent or something, because I send Andy these tweets all the time on X, and he's like, my God, I know Steven, we should have him on the pod. So it's really great to meet you. You know, can you talk about how...

how it evolved, right? So you just described, Andy asked you to describe Notebook LM, like, and for those that haven't used it, it's, you know, maybe you could describe a little bit of how it works, but you know, like, how did it, so you mentioned like, you wanted to create an AI first, you know, knowledge assistant application, like, you know, how did it come about? Like, I mean, you know, given the origins,

Forum3 (06:09.552)
And let me add, Adam, let me just add: Steven, go way back, like way back to 35 years ago, because I remember. Bring our listeners along that whole journey.

Steven Johnson (06:18.647)
Okay.

Steven Johnson (06:25.902)
5,000 years ago, Egyptian scribes first created writing. Actually, I'm working on a history of note-taking and notebooks, so I could go back 5,000 years, but I will not, because we only have 30 minutes here. So yeah, I think the story dates back to when you and I first met, Andy, or probably a year before we met. We were both in college together, and in the sophomore year of my

college experience, in 1987, Apple released a program called HyperCard. And HyperCard was, I describe it as like the Velvet Underground of software: it never had any hits, it never became like a huge hit, but it influenced a whole generation of people who went on to create other software. And it was this kind of pre-web hyperlinking personal knowledge management tool that was

allegedly going to be Apple's next big thing after the interface, but it never really took off except with some hardcore people. And I used it to build a tool to manage all of my notes and research for my classes at college. I called it Curriculum. I had this idea that if I had software that could help me keep track of everything, I would be able to learn more and understand the material more

richly, write better papers, all the things you want. And I got so obsessed with building this tool that I stopped going to the classes that the tool was ostensibly trying to help me perform in. So it was like a first taste of what was possible there. And it set me up in a very important way for my career, because when the web came along about six years later, I was able to

understand the significance of it, because HyperCard had prepared me for it. But it also just sent me on this journey of knowledge management tools. Through the years, I used software called DevonThink, I dabbled with Evernote and Obsidian and all these tools, and I was a major user of Scrivener, which is kind of a knowledge management and writing tool. Whenever some new way of manipulating and storing information came along,

Steven Johnson (08:50.574)
I would try it out. And I wrote about the way I used these tools. I would write blog posts, Medium posts, Substack posts about how I use software to help me think and organize the ideas for my books and things like that. So I had a little side hustle as an early adopter and evangelist for these knowledge management tools, second brain tools. And then in the spring of 2022, so, you know, about seven months before

ChatGPT, I wrote a 10,000-word article for the New York Times Magazine, mostly about GPT-3, but also a little bit about some of Google's models. And it was basically saying, hey, people, this is the real deal. Yes, there are problems with these language models. Yes, they hallucinate. Yes, they are unreliable in some ways. But this is the most significant technological transformation of my lifetime.

What you cannot do is just say that it's hype. This is going to be an absolute paradigm shift in how technology works, and you have to take it seriously. And that piece was the most controversial piece I've ever written in my life. The pushback on Twitter was unbelievable. All these people being like, he fell for the hype, he doesn't realize they're just stochastic parrots, this is so sad, Steven Johnson is so naive. And so I had a miserable time

when that piece came out, even though I'm very proud of it. I think it has, you know, stood the test of time. Yeah.

Adam Brotman (10:23.184)
Wait, let's just pause for a second. You wrote a 10,000-word piece for New York Times Magazine before the ChatGPT public release saying, everybody, these LLMs are kind of a big deal, it's the real deal. That's pretty significant, man. That was pretty prescient, and you obviously were spot on. And it is interesting that you got pushback, but I just want to pause and say that's a pretty telling sign that you were on it.

Steven Johnson (10:37.485)
Yes.

Forum3 (10:53.98)
Well, I'll add, somewhere in there, Steven, you and I have known each other a long time and been friends for a long time, but we had lost touch somewhere along the way. It was shortly after that article that we had dinner at our mutual friend's house in Brooklyn, and you were talking about the article. I think you had not yet joined Google; you were talking to Google.

Steven Johnson (11:20.17)
Yeah, yeah. So basically, the one good thing that came out of that article, the reason why all the abuse online was worth it, is that Josh Woodward and Clay Bavor at Google noticed it. Clay had just started Labs. He's since left to start Sierra, but he had spun up Labs inside of Google, Josh was working with him, and they were both longtime readers of my stuff.

They'd read my book, Where Good Ideas Come From. They were reading my Substack, and they'd read this article. And apparently Clay turned to Josh at some point and was like, hey, I wonder if we could get Steven Johnson to come and give an inspirational speech to the team. And Josh was like, I wonder if we could actually get Steven to come and build something. Because they had this ethos inside of Labs of co-creating with outsiders: if you're going to make a product for writers, bring a writer and have them in the room from the beginning, and not just, you know,

do focus groups with them or UXR. So they cold-called me. I mean, Clay sent me an email out of the blue and was like, hey, you don't know me, but I've got a crazy idea for you. And they kind of pitched me. They said, hey, come, we've got a small team, a couple of engineers, a designer, and we can just prototype something, and you can get access to our latest models and we'll see what happens. You can come 50% time. And so I said, yeah, that sounds really fun. And

I genuinely thought that like we would build some interesting prototypes. It would be a fun adventure. I'd meet some interesting people. I would get access to these models, which were at that point behind closed doors. And I would learn a lot, but I kind of was like, Google must have a thousand prototypes that they're building at any given time. Like, you know, nothing will come of this publicly, but it'll be a fun ride. And then.

Basically, we built this early prototype in October of 2022 that had source grounding; that was the first thing we built into it. You could upload your own documents. I was taking my quotes from books that I'd read, which was always my obsession as a writer. So it'd be like, OK, I've taken all these quotes from my research and I want to upload those. And then I want to be able to ask questions about the quotes so I can remember things or see connections.

Steven Johnson (13:40.812)
And so we had that running in, you know, October of 2022. And basically when ChatGPT hit, there was a little bit of a sense of, what do we have that's native to AI that's in the works here? And it turned out our little prototype was one of the furthest along of anything that was happening. And so the team got a little bit bigger, and slowly over time

we kind of launched in the US as an experiment, then we launched internationally, and then we launched audio overviews about a year and a half later. Yeah.

Adam Brotman (14:16.378)
How did that come about? First of all, as someone who didn't know any of that: so you're saying that Google, the company that, through its various branches, I think it was the Google Brain team, wrote the "Attention Is All You Need" paper and basically pioneered transformer models, which led to LLMs; Google, which also had the DeepMind team

in London, I guess, probably still does, even though they owned them, and AlphaGo. When ChatGPT hit, they're like, where do we have some native AI stuff? Like, that's amazing to me, right?

Steven Johnson (14:54.656)
Yeah, they actually had a chat interface with a model that was called LaMDA. They had that super early, way before ChatGPT, and that was one of the first things that I tested. But they had made a decision not to make those models public, which I actually think was kind of a valiant

Adam Brotman (15:08.26)
Right. Right.

Steven Johnson (15:21.614)
choice because they weren't really ready in a way. But so they had done a lot of great work, but there just weren't a lot of fully realized native applications. And there's a good reason for that, which is the models just weren't good enough. We built Notebook knowing generally what the trajectory was going to be in terms of improving the models. And there was a long stretch of time where you just wouldn't get reliable answers, or the grounding wasn't good enough, or if you had more than a couple

hundred, you know, a couple thousand words of source material, it got fuzzier because the context window wasn't big enough. So I think there was a reason not to be shipping things. But, you know, we had an approach that was distinct, the source grounding approach, and that was genuinely novel. Yeah.

Adam Brotman (16:02.97)
Right.

Adam Brotman (16:11.216)
Yeah, that was spot on, the source grounding. So before we get to, let's turn to enterprise in a second, but just staying with the origin, because this is interesting. I didn't know this, I'm learning, and I'm guessing anyone listening to this is finding it interesting as well. So Google, through you and this early NotebookLM effort in late 2022, is building something

for writers? Who is your intended audience? Students, writers, researchers? Talk a little bit about how that evolved, and also, when did audio overviews kick in?

Steven Johnson (16:42.136)
Yeah.

Steven Johnson (16:49.646)
Yeah, yeah. Okay, so two important points here, I think. In that very early stage, in the fall of 2022, there was a user of one, an audience of one: what would Steven need? That enabled us to go quickly, because it was just, whatever Steven's workflow is, let's build a tool for that. So that was helpful for about two months. But then quickly we got a proper product manager,

Adam Brotman (17:05.411)
Right.

Steven Johnson (17:19.182)
Raiza Martin joined, we staffed up a little bit, and then it was like, okay, it can't just be Steven here. We need to broaden the audience. But what we chose to do, and we had to fight for this a little bit, is we said, we think this is actually a big platform. So while we will think about very specific user journeys as we develop this, we are not going to define our audience too narrowly. People kept saying, is this for students? Is this for writers? And we'd be like, yes.

And yes, and it's for knowledge workers. It's for anyone who works with information scattered across a large number of documents, who has to pull insights, make connections, and understand the material scattered across those documents. Whoever fits in that category, those are our users. And we chose not to be too focused in those early days. And that may not be

the way that you're supposed to do it, but it's the way that we did it, because we thought it was a bigger platform than just a student play or just a writer play. So the audio overview thing is really cool. It's another great Labs story. There was another team inside of Labs that had developed this amazing podcast generator using these new conversational voice models that Google had. So there was this amazing new voice technology.

There were voices trained on people in conversation. It wasn't two voices that you stitched together with a script. It was like these two people had sat in a room for hundreds of hours and talked to each other, and the model built an understanding of how they converse with each other. So we had that tech, and then we had the grounding, and the idea was, we could create a script based on your documents that could have the feeling of an engaging podcast with two fabulous hosts, just like we have right here. And

Adam Brotman (18:53.262)
Interesting.

Steven Johnson (19:08.366)
So, I'd heard the demo. It's really funny. I'd heard the demo and I was like, that's amazing. And it never occurred to me to put it inside of Notebook. Look, you have blind spots, even if you know the product as well as I do. And it was right before I/O, you know, our big annual conference, in 2024, and Josh Woodward at this point was running Labs.

It was such a cool demo, but it didn't live in any product. And we didn't want to show something at IO that was like, here's a prototype of something. We didn't even know where this goes. And like eight days before IO, Josh was like, what if we put it inside of notebook? Like, what would it look like if audio overviews became a feature inside of notebook? And I had a moment of like, what? And then I thought, wait, we're building a tool for understanding things. One of the primary like,

Adam Brotman (19:44.728)
Right.

Steven Johnson (20:03.182)
modalities that people use to understand things is listening to a conversation between two engaged people. That's why people love podcasts: they remember it, they can listen to it when they're driving. If we could generate amazing podcasts based on people's sources, that's exactly in the sweet spot of our mission. So from that moment on, I was like, I'm sold, let's do it. And so we raced for seven days to get this into a functioning demo that Josh could show on stage at I/O, and then finally released it

three or four months later, in the fall of 2024. And it's the most viral thing I've ever been associated with in my career. It just took off, everybody was buzzing about it, and it just kind of launched us, which was great. Because we...

Forum3 (20:43.662)
And can you talk in generalities and whatever you're allowed to talk about publicly about launch and results and usage?

Steven Johnson (20:53.292)
Yeah, I think I could say we have like millions of active users. And to give you some sense of it, like almost instantly from the growth rates we were seeing, both Workspace and Cloud and Workspace for Education all said, OK, we need a version of Notebook LM that we can sell to our customers. And so we spent actually

three or four months just like helping those teams assemble those versions of Notebook. And so now it's like, it's become a major part of kind of the suite of applications that Google offers. And we're just seeing, I mean, we're continuing to grow at a kind of slightly scary rate since then.

Forum3 (21:43.726)
And that serendipity of adding in the audio overviews turned out to be a critical genius move.

Adam Brotman (21:43.824)
So do.

Steven Johnson (21:55.926)
Yeah. And as I said, it was not my idea at all. So it's important to remember, you have to let other people be the judges sometimes. One of the things about it was, by the time audio overviews came out, the models had really improved; we'd moved over to Gemini at that point, 1.5, which was really the breakthrough for us. And it was a good product, but it was very hard to show people or share.

The best way to experience how good a product it was, was to load up a bunch of very complicated documents that you knew very well, ask a very sophisticated question, and get back a long answer with inline citations that take you to the exact passage of the original source, so you can read and fact-check and all this stuff. Which is incredibly powerful in so many different cases, but it doesn't play well on TikTok or Twitter, right? It's not something you can just share and have people blown away by, even though it's very impressive.

But audio overviews gave us that thing. So suddenly all these people on TikTok were uploading their journals and turning them into podcasts and then sitting there filming themselves, listening to these AI hosts discussing their lives and being blown away by it. So it gave us a viral thing that we didn't have.

Adam Brotman (23:01.978)
Right.

Adam Brotman (23:07.13)
Yeah.

Adam Brotman (23:10.832)
You know, I didn't know this story, so this is fascinating to me. And Andy's going to get mad at me if I don't ask you the next question, which is about enterprise use of it. But before I do, I want to be more authentic and just say, first of all, it's incredibly impressive, your insight, not just your New York Times Magazine article, but your insight on this product. And by the way, you say

Forum3 (23:20.87)
No, no, keep rolling. This is great.

Steven Johnson (23:25.133)
Yeah.

Adam Brotman (23:40.612)
you built it for you; I can totally relate to that. I think I drive my friend and co-founder crazy sometimes because I want to just build products for myself, which, in fact, Andy would tell us is probably not the best practice for how you develop a software product. However, when it works, it works, right? And there's a little bit of

the creative genius part where you're just like, look, I don't know how to research this stuff, I just want to build something that I know I want to use, and if I do it well, it'll work out. Now, in this case, when I discovered NotebookLM, it was in the fall of '24, when you introduced audio overviews, with Gemini 1.5, I think, behind it, and it went viral. And one of our favorite podcasters, Paul Roetzer, was mentioning, you've got to check this out.

This is like the coolest thing I've seen since I saw ChatGPT. And you're right, it was the advanced audio combined with the source engineering that went into it that made it so mind-blowing, and that was what people were so excited about. But for me, and I say this all the time, I think I said it online, you saw it.

It makes me feel good because it was your original intent as the co-founder of this. I look at it and I go, this is the best application and user flow I've ever seen for being able to get to answers from multiple sources. And that ultimately is, I think, the AI superpower. I think the AI superpower is helping you

Steven Johnson (25:25.122)
Yeah.

Steven Johnson (25:29.23)
scale.

Adam Brotman (25:32.706)
as an executive or an individual or anybody to just go, there are these disparate sources. In the case of Notebook, you kind of need to know your sources, which now I understand the background of. And I know you guys are working on it being supplemented by internet search. But the notion of, I've got these unstructured data sources, and I know my answers are in there somewhere; if I could just put them all in a pot of stew, I could pull the answers out,

which is what you said was your original inspiration. Then you take advantage of the long-context capabilities of Gemini, you build this thing the right way, and voila, you've got the ultimate custom GPT on steroids, which is another way to think of it. And that's really what it is. And so you came up with the audio overviews, it went viral. And I like the audio overviews as much as anybody. They're the most magical part of it on the one hand, yet

when I'm talking to executives now about NotebookLM, which is part of our standard training for C-suites in our AI bootcamps, we point to NotebookLM, we still talk about the audio overviews because you have to, and they're amazing. But I point out that the real power in it is in getting to insights around something that you're close to, because you know what the sources are.

Steven Johnson (26:39.374)
I'm nice.

Adam Brotman (26:59.93)
And we're good at, I know somewhere in those sources is my answer; I'm going to put these things together and get an answer. So that's how I describe NotebookLM. How do you describe it when somebody says they've never heard of NotebookLM and they're in a business context? How do you describe the best use cases?

Steven Johnson (27:00.056)
Yeah.

Steven Johnson (27:11.628)
Yeah.

Steven Johnson (27:16.269)
Yeah.

Yeah, so let me just say very quickly one quick thing on what you said at the very beginning there, which is on kind of like building products for yourself. Like I think the important thing is that's a great way to start in building a product because you can go quickly. But the other thing is you have to learn and I had to learn this kind of quickly, I think, but I had to learn it. You have to be able to then widen it and give up your own kind of

personal obsessions and let other people on the team add their own obsessions to the mix. So many things that have happened in the last two years since that early prototype have come from other people on the team. It's no longer at all the Steven project. Audio overviews are the best example of that. Mind maps is another feature that came from a junior engineer, and it has been amazing, has really taken off. So in terms of the enterprise use, I think

you know, you've described it well, right? To do your job, if you're a CEO or the head of the marketing department, or you're working on a startup and trying to build a company, there's just a lot of information scattered across a lot of different documents that you need to work with in one form or another to do what you're doing. I think there's a very

factual-lookup kind of use of NotebookLM, which you invoked, and that is definitely part of it. Like, what was our Q1 forecast? I can't remember the exact number. You can just query it in natural language, you'll get the answer, and you can quickly go read the original document. Amazing. But there's more than that. The thing is that these models are now capable of a really deep understanding of the material, and of kinds of pattern recognition and analysis that

Steven Johnson (29:06.665)
is just insane. Like no computer in the world could do this kind of thing two years ago and now you can do it in a free app. So one great example is I have a fake notebook for a fake company that I've created, which is a smart composting company called Smarter Compost. It's a high-tech composting company and I use it to test new features and new models in this kind of enterprise environment. In this case, it's a startup. And so I've built out like

user interviews, sales call transcripts, fake board meetings, fake product description sheets, a fake business plan. I have this whole world; I think I'm going to start this company, because I've gotten so into the world of it. So one of the things you can do is say things like, based on those last 10 user research interviews, identify the top five pain points for the users, give me a quote from each user

for each of the pain points, and briefly describe what the pain point is. And it'll just go through like a hundred pages of raw transcripts, figure out what the overarching pain points are, organize them, and create this amazing, almost briefing-doc-like summary of those pain points. And then you can be like, all right, tell me a little bit more about number three, and drill into that a little more. And then you can say, okay, suggest two specific improvements to the product based on that user feedback. And it'll be like, you can change this, you can change that.

And so you really can co-create with the product. You can have it do very high level, like that kind of like thematic analysis of a hundred pages of raw transcripts. Like how long does that take a human to do it by hand? It takes like 10 hours to do that kind of analysis, maybe more. Notebook will do it in 10 seconds.

Adam Brotman (30:55.109)
Right.

Yeah, sorry, Andy, to dominate here, because I'm such a fanboy of this product and what you did. When you just said that, tell me if you agree, Steven. Let the three of us talk about the power of AI for AI-first organizations, and how people can be thinking about not just NotebookLM but AI in general, and how NotebookLM is, I think, so interestingly positioned, which is this:

What I'm finding the most interesting thing about LLMs, or about AI systems in general, is when you throw a lot of different types of even unstructured data at them and don't be scared of mixing those things together. To your point, transcripts with structured data, with documents, with presentations. I've found,

particularly recently, because of these thinking models, and because they're almost agentic themselves, that when you give a lot of disparate but related information to the LLMs, they're incredibly good at quickly going through it, synthesizing, and giving you insights back. Before, you might have been scared that you were going to confuse it or weren't giving it the right type of information.

These LLMs are just incredible at how they can go through it. And so what I found so interesting about NotebookLM is that it's an application doing that at its heart. It's saying, give me a bunch of sources, I can take hundreds of sources, actually, if you get the right version of NotebookLM, and they can be of different types. People always say, I want to be able to talk to my data and get insights from it. You've really,

Adam Brotman (32:55.714)
in this one program, kind of hit at what I think is the magic of LLMs and of AI systems, which is that it's not so much your prompt as the source engineering. When you give as much context to these LLMs as possible, you'd think that's going to confuse them, that it's going to get drowned out, or that I've got to be a smart prompter. It's incredible. They just take it, and I think it's magical how they just make sense of it

in a way that doesn't make sense to me. Can you comment on that? Do you agree that NotebookLM is almost like that in its essence?

Steven Johnson (33:29.582)
Yeah.

Yeah, 100%. Well, all those things you're saying are very nice, and I do agree with them. I used to kind of joke...

Forum3 (33:39.026)
Unlike the Twitter comments that you got at the beginning to your ears.

Steven Johnson (33:42.858)
Yeah, yeah, yeah, yeah. No, it's nice to be validated after all these years. I sometimes would describe it in the early days, and this is a very nerdy way to describe it, but it's partially true, as: NotebookLM is basically a UI for allowing you to very easily move things in and out of the model's context window.

That's kind of what it's designed to do, because the context window is basically, hey AI, pay attention to this. So it's a tool that lets you very quickly say, now I want you to focus on this, now I want you to focus on this, now I want you to focus on this, given that there is a finite amount of context available, and given that the models are massively more accurate and useful when they are given something to focus on.

Adam Brotman (34:11.78)
Yeah, that's right.

Steven Johnson (34:36.142)
So we've kind of built a tool around that. One of the things you can do in Notebook that people sometimes don't realize is that all the sources on the left-hand side have checkboxes next to them, and you can dynamically uncheck them. Whatever is selected at that point, the model is focused on; whatever is unselected, the model can't even see anymore. And when we were building that, I was there at the very beginning of the product, and I remember some folks were like, no one's ever going to understand what that is.

No one's going to get that they can dynamically change the context of the model. And those boxes are like the third most clicked-on thing in the app. People totally get it once you learn it, but it is a new convention: I'm shifting the focus of this AI onto something new.
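A minimal way to picture the checkbox mechanic Steven describes: each source carries a checked flag, and only checked sources are assembled into the context the model sees; unchecked sources are simply left out. The Python sketch below is illustrative, with hypothetical names; it is not NotebookLM's actual code.

    from dataclasses import dataclass

    @dataclass
    class Source:
        title: str
        text: str
        checked: bool = True  # mirrors the checkbox next to each source

    def active_context(sources: list[Source]) -> str:
        """Join only the checked sources; unchecked ones are invisible to the model."""
        return "\n\n".join(f"{s.title}\n{s.text}" for s in sources if s.checked)

    notebook = [
        Source("Q1 board deck", "Revenue grew 12% quarter over quarter ..."),
        Source("User interviews", "Users found onboarding confusing ...", checked=False),
    ]
    context = active_context(notebook)  # only the board deck reaches the model's context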

Adam Brotman (35:19.534)
That's right. But what's so great about that, and why I think people did intuit it, is because if you think about it, one of the most used things on OpenAI's platform are these custom GPTs, as people call them, right? If you think about what a custom GPT is, it's basically a structured prompt, right? You can add some knowledge files, as they're called, then you can write your instructions, and then you can chat with it and share it.

Steven Johnson (35:31.575)
Yeah.

Adam Brotman (35:49.486)
And yet it's got limitations. It's got context window limitations, it's got model limitations, and it's really a pain in the rear end to actually switch around your knowledge files and things like that. You guys took that and put it on steroids, in a sense. And I'll tell you, when Andy and I see wrappers out there in the wild, and every app is basically a form of a wrapper on some level, when I see them, they're all just, yeah,

okay, what's my system prompt? What's my data source? What's my UI? You know what I'm saying? And what you guys did so great is to basically give people the equivalent of a custom GPT, but with many more sources that are dynamically easy to check and uncheck, so you can really allow the model to focus on what you want it to focus on in that moment. That's pretty cool.

Steven Johnson (36:44.118)
And now what's coming, the next iteration of this that we're gonna start rolling out, or depending on when this airs, we will have just rolled out, is basically thinking about Notebook also as a distribution platform for information and knowledge. So the single biggest problem we actually have is our onboarding experience is really suboptimal. You get dropped into like,

Forum3 (36:47.738)
Yeah, give us the senior.

Steven Johnson (37:14.024)
add sources to this obscure source panel modal that pops up. And a huge number of users are just like, I have no idea what this is. See ya. They sign the terms of service and then they go. But when you've spent some time inside a really nicely curated notebook that has great knowledge and really high quality sources, and maybe there's an audio overview that's already generated that you can listen to, you really see the value of the product. You're like, wow, this is...

knowledge that I can explore in this new way, I can create a mind map, I can do all this stuff. And we think it's actually a fantastic way to share information with people inside of organizations, or for creators to share. So we're just rolling out now the ability for users to create public notebooks that are accessible at a URL to anybody who has a Google account. And we think that's incredibly useful

in the business use case. You can see people saying, hey, we've created a basic description of our product line as a notebook and we've uploaded all our manuals, so anybody who gets this notebook, instead of reading a PDF, can go into the notebook, ask questions, and have their questions answered there. Scholars who are publishing important research can publish it as a notebook so that people can engage, ask questions,

understand it at different levels. Say, hey, I'm a 10th grader. I need to understand this complicated scientific paper at a 10th grade level. I want to listen to an audio overview about it. And we're going to start curating notebooks with some partners where we create a notebook on a given topic. So one example with public domain information is we've got a notebook with all the earnings reports and analyst Q and A for the top 50 companies around the world for Q1. So like

It's just an amazing compendium of information. If you're a financial analyst trying to understand the impact of tariffs, you can ask, show me the main trends related to tariffs across these Q1 reports, and you get an incredible document in 10 seconds. So right now it's a productivity tool, a tool for gathering your own sources and interacting with them. We think it's also a kind of publishing and distribution tool, and you're going to see a lot of that in the next couple of months.

Forum3 (39:00.742)
That's awesome.

Adam Brotman (39:26.381)
Is there a

Forum3 (39:26.684)
That's awesome. Let me interrupt. That's awesome, Steven. Unfortunately, I think we have to start closing up. My sense, as Adam was leaning in, is that we could easily continue this conversation. We will have you back. Adam, I could tell you were thrilled to have Steven on today. What jumped out for you from this episode? What do you want to call out for our audience?

Steven Johnson (39:37.698)
I'll come back anytime guys.

Adam Brotman (39:50.49)
Well, two things.

I have two things to call out. It was surprising; I didn't know we were going to learn this. I think that just understanding the origin story of NotebookLM, and Steven, your personal background and how you came to Google and whatnot, really makes me appreciate NotebookLM in a different light, because it really stood out that this is really, you know,

you've been working your whole working life on cool ways that technology can help organize your thinking, as a second brain, around sources and quotes and other things that you know are relevant but want to mix together and have access to in a different way. And LLMs basically allowed you to create a native AI application to do that. I didn't know that story, and it makes me appreciate the platform even more, right? That was the first thing.

And that's an important takeaway for people to understand: while NotebookLM has audio overviews, the origin story is really relevant to understanding the product, so that you can get more use out of it. Because if you actually understand that it was originally developed

not to have the audio overview per se, but that the audio overview really takes advantage of the original vision of the product, that actually makes sense to me. And it's how I've been explaining the power of the product to people: it's more about, you know, source engineering your way to some insights. That's number one. Number two, I loved what you said about how, to get the most out of an LLM, it really is about

context engineering: how do you make sure that an LLM, which is so powerful, pays attention to the right things in its context window? This is a context engineering app, and it gives you a lot of control from a UI perspective. So those were the two insights I took from this, which was great.

Forum3 (42:29.618)
Yeah, for me, I mean, Steven, this is a different type of episode than we've ever had on our podcast. Most of our episodes are with business people talking about their AI transformation journey. And in many ways, your story is in part about a personal transformation, but also about the Google transformation, and how

Forum3 (42:59.53)
a small team at Google managed to come out with a blockbuster hit by focusing on AI-native applications. And I think it really was the first one after ChatGPT. Now, OpenAI was not, you know, they weren't a big company, right? They set out and they just innovated, really, before Google did. And Google had all the tools. And, you know,

there are parts of Google's reluctance to release AI to the public that should be applauded. And it really took OpenAI to initiate things and have their viral hit with ChatGPT, powered by GPT-3.5.

But NotebookLM, kudos to Google and to you on both of those: for getting a native AI application out there, and, eight days before Google I/O, deciding to launch it. That story, I think, you know, the market responded, right? It was really the mix of a career writer

agreeing, somewhat haphazardly, to take a half-time job at Google and help lead this team. That whole story really resonated with me, and kudos to Google and kudos to you. This is the first of what I think will be many native AI applications. I'm really excited and looking forward to seeing what Sam and Jony Ive, you know, put together.

Maybe Apple will get their mojo back and put something out. I mean, Microsoft clearly, you're seeing large corporations, large enterprises, introduce native applications, and it's really, really exciting. And NotebookLM was the first, and the success resonates.

Steven Johnson (44:42.574)
Yeah.

Forum3 (45:03.886)
It is partly a digital transformation story. It's just a different kind than the ones that Adam and I usually focus on. So that's what jumped out at me. Steven, any closing comments from you for our audience?

Steven Johnson (45:16.758)
I mean, I loved this conversation, and yeah, it has been an amazing ride, and I just want to stress how collaborative it has become. When we tell the origin stories, it's a little me-focused, in a way, because it was a small group at the beginning. But no, there are so many things, so many things that would never have occurred to me that we ended up doing that were brilliant.

Forum3 (45:35.942)
That's incredible. Steve, you didn't take all the credit, nor did you...

Steven Johnson (45:46.502)
So it has been the most collaborative, you know, work project of my career. I would say, I've got a couple of things. Yeah, yeah, yeah. But I did startups, I did TV shows that were pretty collaborative. With this one, it's a real team effort.

Forum3 (45:53.646)
as the solo author.

Yeah, yeah, that's true. That's true.

Yeah, Steven, thank you for taking the time and being on. We will have you on again. And with that, thank you all for listening to AI First with Adam and Andy. For more resources on how to become AI first, you can visit our website, forum3.com, and download case studies, research briefings, and executive summaries. We have a lot of amazing content and material on the website for those looking to expand their knowledge. We think

that you can't over invest in AI. Thank you.