“Delve is an insight engine that transforms how brands understand their world.”
With every story, thread, and meme battling for our attention, what do we focus on and care about? Communications pros Sean Garrett and Allison Braley—trusted by Twitter, Amazon, Meta, Slack, Bain Capital Ventures, and more—talk with the people shifting the future of communications about who pays attention to what.
Sean 00:00
Welcome to Attention Shift. We unpack where communication and communications are headed. I'm Sean Garrett, and we're supported by Delve, the top-level context engine for comms. My co-host, Allison Braley, and I talk to industry experts about how we show up and get our message across in an era of limited time and shortened attention spans. Let's dive in.
Sean 0:25
Kevin Roose needs no introduction. He's the New York Times tech columnist, host of the Hard Fork podcast, and author of several best sellers. If you're in tech, Kevin comes through everywhere like surround sound.
Allison 00:40
Kevin published Futureproof: 9 Rules for Humans in the Age of Automation in 2021, so he was really early to the AI conversation, and he has a whole lot of thoughts on how AI will change everything and what will stay the same. And so obviously, in my world, in VC land, at BCV, we're talking about AI all the time. All of the top talent seems like they're attracted to what is shaping up to be a generational opportunity to build companies. But as I work on messaging with these companies, I keep going back to: when everything is AI, is anything AI? It reminds me, just to be an old person for a moment, of mobile in the early 2000s, where it's like, we have an app, we have an app, and then that became incredibly uninteresting as a thing to say because of its ubiquity.
Sean 1:22
I think we're in a transitional phase. I mean, I think we're…
Allison 1:24
Six months, a year?
Sean 1:26
I think two years, probably. Meaning, like, how long we will be referencing AI all the time. And I think it's similar to the dawn of, like, I have a website, come see me at SeanGarrett.com. Like, that was everywhere for longer than two years. But that was, like, the mid-90s. And then, like you said, everybody has an app, yeah? Well, for a while there everybody wanted a blog, then it was an app, then it was like, "What are you doing on social media? What's your Facebook?" Right? And so I just think this is a similar type of phase. Obviously, we can argue about how transformational it is relative to those other big phases. I think it's pretty transformational, but I do think that for a while, AI is going to be a thing we put into lots of different conversations, until it is just either ubiquitous or it's assumed.
Allison 2:24
Yeah, and there's all these words within AI, too, that companies seem to want to use to signal new again. Agentic, agentic. And we have superintelligence now starting to crop up a lot. And my worry is, and maybe this is again oldness, but we're going at such a fast pace with these words. Before, a word would mean something technical and specific, and you could use that word for a while and it would still mean that. Now everybody's saying agentic, and probably only 30% of those companies are actually doing anything agentic. And so the companies that were using that word correctly are sort of penalized for being in a category that's…
Sean 3:04
I don’t think there's rules.
Allison 3:06
I know there aren't rules, but it disappoints me as a communicator, because it causes us to have to find new and novel ways.
Sean 3:16
It’s just your job to come up with something more snappy.
Allison 3:17
Sean, I'm not trying to do real work here. Can we just cut to the part where we have these shortcuts? What's the new agentic? What's the new superintelligence? It feels like we're on a three-month clip of having to retire these words.
Sean 3:30
I guess my point is, I wouldn't even focus on that stuff, right? I would be focusing on whatever the company does. And to me, that's the sidecar part of it. Like, I don't think you need to worry about the different phrases that you're using to define the specificity of your AI and what it does, because, to your point, it's changing so fast. Yeah, it's hard to take it in. I mean, there's obviously certain trends, like vibe coding, for example, right? Which Kevin talks about.
Allison 4:09
Try vibe cyber security… doesn't work as well.
Sean 4:11
No, it does not. I mean, vibes are good, you know, until they're not. I think trends in usage of AI are probably going to be the things that I would lean into if I were communicating around it.
Allison 4:20
It depends on the category, to me. So if we have a company that is building in the AI professional services space, those people actually don't really want to know that AI is involved, because it makes them more suspicious. It's like, you just do the work, and I don't need to know more about that. But then if you're selling to developers, I think it becomes a lot more important to be specific about the technology, because they're persnickety about it. They want to know how it works and get really under the hood and rifle around. And that's where I feel like we're losing the meaning on some of these words.
Sean 4:52
Yeah, I hear you. No, totally. But I just think, by definition, this is, and will continue to be, one of the most noisy categories ever. And so we talked in an earlier episode about big narrative walls that are against you, but this is just a wall of noise, right? And sometimes with a wall of noise, you either have to be the one to come up with the terminology that defines everything, yeah, define the category, or not at all. Like, you have to be the one who comes up with a term like SaaS or something, right? Yeah. And so, you know, I think we're still working in the wilderness there, ironically. And maybe someone should ask Claude or something to come up with a better term. But I kind of don't think it really matters right now.
Allison 5:44
Yeah, I do and I don't. Like, I want to be able to just do this simplistic thing for every company where you're like, ignore the AI and talk about the benefit. But I think a lot of companies are getting a boost in valuation and a boost in interest by being affiliated with AI, and so finding ways to talk about it that aren't meaningless nonsense matters.
Sean 6:05
I think for your audience, for what you're talking about, especially in an investor context, I totally get it: where do you fit within this big category? What's your specific thing that you own within that category? For sure, that's super important. I'm thinking from a broader lens of just, how do people even tell the difference between any of these companies? Totally, right.
Allison 6:28
It's a sea of same.
Sean 6:32
Exactly, and so absolutely, from an investor standpoint. But I do think you're gonna do a lot of work, and in three months you're gonna be changing it.
Allison 6:36
It has to ladder up to a core idea that's actually right, the benefit, but how you communicate it seems to be more and more challenging. And I've also noticed it on a visual design level, and shout out to Annie, our designer at BCV, but they all have the same look and feel. It's like a black background, on the right some sort of rotating, nebulous image, very sharp-edged, kind of Swiss design style. I do wonder, too, when that's going to get played out. It feels like we're all calling ourselves something—Perplexity… all of these types of names—and again, it's becoming harder and harder to stand out if you're just running that same AI playbook. I think that companies are not going to be able to be this lazy anymore very soon and get away with it.
Sean 7:24
No, I mean, it's the same. We saw this with the direct-to-consumer trend of the teens, right, where everyone had the same Instagram-looking kind of vibe, and you couldn't tell whether you were selling cat food or a mattress; it all looked exactly the same. And, you know, I just think we're not very complicated. We're pretty simple people. To me, I look at that and I'm like, that's a huge opportunity. How can we zig while other people are zagging over there, doing all the same thing? And so one would hope that someone's looking at that as an opportunity, like, "Oh, this is a really interesting way to retell or reskin this story of who we are and why we are." And that's probably going to be the only way that you're going to be able to hire people and, you know, kind of wake up different investors, who are seeing the same stuff, and customers, exactly. So I don't know. I think that will come; that's just natural.
Allison 8:32
But we're in this really weird, nascent phase of copying language, copying visuals, just to say, look, I'm like that company that's a unicorn, so I'm the same. I don't think that's going to work for too much longer. But, you know, talking a little bit about social media, as you mentioned, Kevin Roose, who we're talking to in a little bit, has his toes in a ton of different types of media. He has a podcast, he has books, he's at the New York Times, newsletter, social. But the media industry, in the traditional sense, is dramatically changing, and it has been for a long time. So first, obviously, social media decimated media, and now I think AI is increasingly starting to pull clicks away and steal content and all of that, so…
Sean 9:17
But you know who doesn't use AI? Obama's chief speechwriter!
Allison 9:22
Oh, yeah, that amazed me. But even in the podcast episode, he's such a phenomenal speaker that I was, like, almost in tears at points. He doesn't need AI. If you're sitting at home and wondering if you need AI to be a good writer, you do.
Sean 9:36
He makes a point that he does use it sometimes for, like, some research, yeah, deep research or whatever, and, I think, first drafts, outlines, things like that. But I'd be very curious, if the Obama administration existed today and that speechwriting team was doing work today, where the pressure points of AI would come in. Because, let's be realistic, as Cody talks about, they've got to crank some stuff out from time to time. You've got to just make stuff happen really fast. New issues pop up. And once you use it and it gives you a half-decent response, there's that allure of going back: I'm just gonna get one more hit here, and, you know, maybe it'll help me the next time, and the next time. And then it just becomes part of your process, for better or for worse.
Allison 10:27
And as it starts to learn your style, I think it gets better, too. Like, the initial first draft is always the most basic, sort of worst viable version of what you're trying to do. But as it starts to learn your style, it improves. We've invested in a company, Krea, for design, for example. It takes a little bit of input, but once it learns your style, you can say, like, whip up an invitation for this AI builders weekend in Stinson that's a watercolor of the beach in BCV style. And it's like, sure, and you're getting back things that don't require an enormous amount of tweaking. So you get out what you put in, to some degree. At the initial baseline of, like, write me a speech for the company all-hands, it's going to be one of the worst things you've ever read.
Sean 11:11
Maybe think about—speaking of speeches, before we get into Cody—there was a banquet last year, and I'm not going to out which kid of mine did this. But all the kids talked about how awesome the seniors on a sports team are. And basically all the kids later admitted, "Oh yeah, I wrote my speech using, you know, ChatGPT." But they're all kind of, like, these very sappy, very kind of emotional, quick vignettes. And they feel really heartfelt. And they're like, yeah, no, of course, I wrote that in ChatGPT.
Allison 11:41
Similar story, back to the precision on language. My daughter is in third grade, different stage, but she had to do this thing where you give a compliment to each kid in the class, and there's, like, a book of compliments, because they're working on kindness. I mean, we should all do that in every work setting, but the diss compliment became "unique," and so if you got "unique," it meant, "You're weird." So my daughter got three uniques and was kind of like, I know who wrote that, and we're not cool; we've got beef. So euphemism words are some of my favorites. I wonder what ChatGPT would have to say about "unique." It would probably make it a compliment, and tell you you're so smart for inquiring about it. You know, is unique in a good way? Kevin Roose has a significant number of predictions about AGI belief systems and the state of journalism in 2025, so we're going to hear from Kevin Roose on today's episode. Over to Kevin.
Allison 12:21
Welcome to Attention Shift. We wanted to get started today by asking: first, we're big X fans of yours, and it looks like you wrote recently about this weird overlap between people who say that climate experts should be taken seriously but say AI is sort of nonsense. Thinking about that, how did you come to that conclusion? And tell us more about that observation.
Kevin 12:58
Oh, you mean X, the social network. I thought you were saying you were big ex-fans, like you used to be fans, and then I did something, and you're like, I don't know about that guy.
Allison 13:12
Oh, well maybe after today we’ll be ex-fans. We’re still fans as of today.
Sean 13:13
And also, Allison's only speaking for herself when it comes to X.
Kevin 13:17
Yeah, you're off the sauce, Sean, but I know you're still there lurking. You have to be. So, okay, X, the social network run by Elon Musk. Okay, so that post was an attempt to capture this disconnect that I see in the people in my life and in my community and in my professional peer group who are very dismissive of the possibility that AI is powerful, that it's useful, that it needs to be taken seriously, that improvements in AI capabilities could have meaningful consequences for labor forces, for regulation, for the economy as a whole. There is this strain of AI skepticism that I find really baffling and frustrating, which is that the very same people who would say you've got to listen to the experts when it comes to climate or COVID or any number of other issues are also saying, when it comes to AI, we can't trust the experts at all, when they say that powerful AI is coming, that these systems are already quite useful and powerful, that they're getting more useful and powerful every day, and that we need to start making decisions now about things like regulation, because we are only a year or two away from having something much more powerful than even the models today.
Sean 14:52
Why do you think this is?
Kevin 14:54
So some of it, I think, is just geographic, right? I mean, I'm sure you've observed, in the Bay Area, people are behaving as if we are months, if not weeks, away from something truly seismic that will reshape the economy, that will establish an AI overclass, and that this is all going to happen within the next 12 to 18 months. That's not a fringe belief or an exotic belief in the San Francisco Bay Area. But if you go outside that, it sounds crazy. So I think some of it is just the sort of intellectual waters that we swim in here without really realizing it, and maybe the bubble that exists here. But I think there's something different for journalists. I think journalists have become conditioned to be skeptical of big, sweeping promises and promotions from the tech sector. Web3 was supposed to revolutionize the internet. It didn't. The metaverse was supposed to revolutionize the internet. It didn't, or at least it hasn't yet. And so for a lot of journalists, who, I think to their credit, tend to be more skeptical than people in industry, there's some pattern matching going on of saying, well, the last three or four times you all in tech told us that everything was about to change forever, it didn't happen. Or at least it didn't happen the way that you said it would.
Sean 16:15
Yeah, but wouldn't AI be considered more of a shift along the lines of, like, the consumer web starting and the dawn of smartphones, which were actually pretty big things?
Kevin 16:25
I think so. I'm a believer that AI progress is real and needs to be taken seriously. And by the way, that doesn't mean I think it's a great thing. I think there are real risks and dangers. And so I find it frustrating when people use their sort of blanket skepticism of AI to say, like, we don't need to do anything to get ready for this, because it's all fake. It's all hype. It's all a big bubble.
Sean 16:48
Yeah, but I guess what I'm saying is, for the people who are more educated, who are looking at this: I understand how Web3 or the metaverse, you know, didn't go like it was supposed to, yet. But those seem more incremental, versus, like, the dawn of the web in 1995 or the iPhone in 2007, which seem more fundamental. And this seems more fundamental. So is it just that we're now in a place where no one wants to believe anything? Or is it that people don't think it's as fundamental a shift?
Kevin 17:22
I think there's some of both. There are some people who just don't think it's as fundamental a shift, who look at the limitations of today's AI models, or what they perceive to be the limitations. You know, they don't always get things right, or they sometimes have this, like, false confidence about them, and people extrapolate that to say these things will never be reliable or trustworthy enough to use in any way. The internet did not, you know, make errors like this when it came out; the smartphone did not, you know, call the wrong number two out of every 10 times. There's something about the sort of non-determinism of these models that makes people distrust them.
Sean 18:01
I think people are being very nostalgic, like they don't remember how janky everything was.
Kevin 18:06
Like the iPhone, the iPhone 1, like famously dropped calls, and you couldn't hold it a certain way. And like, you know. So I just think people are not thinking in terms of, like, trajectories and exponentials and where things are going from here. But, I think to the second part of what you said, I think there are people who just do not want this to be true. Because if it is true that AI is real, that these things are useful, that they're capable of, in some cases, outperforming talented humans at economically useful tasks, that is a threat to them.
Allison 18:42
And it's easier to put your head in the sand.
Kevin 18:45
To put your head in the sand, to treat it as an encroaching enemy. I think there is a natural sort of fear-based response to seeing something that may pose a threat to our way of life. And by the way, like, I'm not immune to that either. Like, I'm a journalist, I'm in the words business. These things are quite good at words, and so I do the thing that everyone does, where I'm like, oh, it will replace a bunch of workers, but not me. Like I'm special. I have these skills that can't be replicated. And I think that's just a tendency that we all have.
Sean 19:19
It's funny, because, yes, obviously you can run podcasts now using AI that sound very real. So this could be replicated very easily.
Allison 19:26
My theory on the whole thing, Kevin was maybe you were just posting on X that AI is so smart and powerful to kiss AI's butt so that when it's in charge, it will keep you around.
Kevin 19:36
I do have some reputational repair work to do. I need a little bit of, like, you know, whatever the AI chatbot equivalent of crisis comms is.
Allison 19:49
I mean, you're smart, you're in charge.
Sean 19:50
But how soon will you be debating with AI on X?
Kevin 19:55
I mean, I assume I already am.
Sean 19:58
I mean, but really, in a real-time way. Like, there's literally AI creating accounts and coming up and having conversations with people today, or spinning up YouTube videos to be like, you know, this thing just happened, and it's not about AI, it's about this thing in the world, and it's a person debating some political issue.
Kevin 20:17
I think that's already happening, Sean. I don't know if you've ever clicked the, like, "see unhelpful replies" button on X when you post something; it hides some of these because it detects that they're bots.
Allison 20:28
Sometimes, those are not safe for work. Other times those are bots.
Kevin 20:30
Sometimes those are porn. Sometimes they are just LLMs.
Allison 20:36
If it is a helpful reply, who are you to judge?
Kevin 20:38
Exactly. And, I mean, I personally know people who run their tweets through LLMs to sort of, like, write them and improve them before they post them. So whether it's humans actually pushing the button or not, I think a lot of what you see on social media is already being generated by AI.
Allison 20:54
We've talked a little bit about how journalists are reacting to this technology, and your thoughts there. But what responsibility do you think companies have around communicating about this technology? Because I'm always interested, as a communications person. You sort of have Anthropic in one corner, very open about being incredibly responsible with this technology. You have other companies sort of saying, this is going to be really smart and powerful, don't worry about it. And then you have Vinod Khosla, who the other week was on stage talking about AI and very openly saying, we actively invest in companies where AI is going to replace jobs, and being very open about that, where other people are a little cagey about it. Where do you think the responsibility lies, and who's doing it well?
Kevin 21:42
I'll make a blanket statement, and then I'll answer the specific questions. I think companies are scared. I think they are very scared. I talk to a lot of corporate leaders, executives, people in charge of big organizations. They don't know what to think. They don't know what to do. They don't know who to trust, and they sure as hell don't want to get beat there, wherever there is, by their competitors. And so I think there is an urgency paired with a confusion that is making for some really challenging times for business leaders. And I think for some of them, they are worried about the political backlash if they were to do things that they already can do today. I know of companies whose leaders will tell me, look, I could, if I wanted to, take out 30% of our accounting department tomorrow with AI, but I don't want to be the guy who lays people off because of AI, because I think they understand that there's a reputational cost associated with that. So there are people in the AI world who believe that AI is going to cause sort of mass unemployment, that one day these models will be good enough that companies will just lay off their entire customer service department, their entire billing department, maybe some of their middle managers, maybe their engineering department will be shaved down to a fraction of its size. I don't think that's the likely trajectory here, because of this political and reputational risk. Maybe that changes. Maybe it's not taboo to sort of say we're getting rid of a bunch of people and replacing them with AI. But I think today, leaders are fearful that if they do that, they're going to be singled out for criticism. Their workers are going to strike or unionize. It's going to be a big mess. And by the way, I think they're right to fear that, because historically, that has been one response that workers have had to automation in the workforce.
Sean 22:36
These companies are very big, with lots of money. They have a lot of ability to communicate about this relatively effectively, but to your point, they're being relatively cagey about it because of, kind of, fear of stepping in it. And that's whether it's internal communication or external or what have you. You know, you mentioned at the very top that people just aren't trusting the experts in AI. Is there just a kind of deficit of expertise and honest conversation about AI? Obviously, journalists play a role in this, for better or for worse. It just seems like there might be a vacuum of people to trust about AI right now.
Kevin 24:14
I think that's right. I mean, I think the confusion is related to the lack of trustworthy experts on this. Trustworthy and neutral, by the way. I mean, there are plenty of people you can, you know, call who will show up with an army of consultants and tell you that AI is going to revolutionize your business, but they're trying to sell you something. I think there are fewer people who are seen as authoritative and objective when it comes to AI. I try to be as objective as I can about this stuff, but I think there's a lot of doublespeak and people hedging and being cautious about this. I mean, one of the things that I believe very strongly is that business leaders and leaders of any kind need to be honest with workers about what's going on. I don't think it's an ethical thing to do to tell people they're not going to be replaced. I mean, you'll hear this all the time from corporate leaders: oh, we're not using AI to replace people, we're using AI to augment people or to make people more efficient. And that may be true now, but that is not the long-term vision that these systems are being used to fulfill. The entire point of AI is automating labor that used to be done by humans. That is the whole ball game, and nobody would be investing in it if they didn't think it would create massive opportunities to do large amounts of work in a cheaper way than you can do today with humans; otherwise, the whole investment case for AI disappears. So I just think that people need to be really honest, so that workers can see it coming, so that they can prepare themselves, so that they can maybe retrain or get some other skills that could help them get a foothold in the labor market. I think a lot of people are gonna be blindsided, and I don't think that corporate leaders have been doing a good enough job of being honest with people about what's coming.
Sean 26:06
How much of this is just because the speed of change has been so fast that it's hard to keep up with it? I mean, you wrote a book called Futureproof: Nine Rules for Humans in the Age of Automation, and it came out in 2021. Like, how different would your book be if you wrote it today?
Kevin 26:21
It would be pretty different. I mean, I think that a lot of the book holds up. I wrote that because I was hearing the early seeds of what would become ChatGPT and all this AI stuff that we see today from people who work at the AI labs. And so I was trying to figure out, okay, what do we do? What can we as individuals, as workers, as citizens do to get ready for the sort of onslaught of AI that we're about to face? And a lot of the advice that I heard from experts at the time, and that I got from looking back at previous waves of technological change, is that the best thing we could do was just to make ourselves more human. That instead of trying to compete with the AI, to try to code better or faster than the LLM, to try to be more efficient and more productive and gain an edge that way, we really needed to figure out what unique human skills we had that the AIs couldn't do, and do those. I think at the time, I would not have predicted that some of the first jobs to be disrupted by AI would be creative jobs, because at the time, it seemed unlikely that AI would get creative; the best AIs on the market, pre-ChatGPT, were still quite bad at doing things like writing or generating poetry or essays. They were much more useful for things like algorithmic recommendations, and so I thought that the data scientists and the engineers would be disrupted first. That said, I think that's happening too. It's just kind of all happening at once now.
Allison 27:56
On the creative side, we played with it a lot, obviously, for design, but I was prompting it to give me an image of twins at one point, for a post I was doing where one twin was having an easy time and the other one was struggling. And every single creative output it came up with was sort of, like, edgy twins, kind of, you know, you know what I'm saying. There's a certain type of twins you're searching for on the internet, and it's often not safe for work, and the LLM seemed to have, like, internalized that. So it took me ages, and I finally just had our designer do it. Our in-house designer can just do things, obviously, that DALL-E and Midjourney sort of can't. And on that note, there are a lot of fun uses of AI too. Like, I saw you've been getting poker lessons from AI. What other use cases have you found that might be a surprise to people?
Kevin 28:45
So I use AI constantly, for everything. I mean, I use it for work. I use it for my personal life. I recently started experimenting with what has come to be called vibe coding, which is basically where you use these tools to create software programs for little specific tasks or needs that you have without, in my case, actually knowing how to code at all. I don't know how to code, I don't know Python, I don't know JavaScript. I'm fairly useless as a programmer. But now with these LLMs, you don't actually need to. You can just go in and say, build me a little app that does X, Y, and Z, and a lot of the time, it'll do a pretty good job of that. So that's been my most recent set of experiments. I've coded, let's see, an app to help me pack my kids' lunch in the morning, an app to help my co-host, Casey, with his new hot tub, reminding him when to put in the various chemicals. I've been doing a ton of vibe coding. Honestly, it's very fun.
Allison 29:46
You better get some use out of that hot tub if you're managing it, you know, he better have you over.
Kevin 29:47
Exactly.
Sean 29:48
What are you doing it on? Like, how are you building them?
Kevin 29:50
So there are a bunch of tools. I use Cursor, Replit, Bolt, and then Claude itself; the AI tool just has, like, a pretty good coding feature right inside of it. Some of them require a little more programming knowledge than others, are a little harder for beginners to access, but any of them do a pretty good job at this point.
Sean 30:16
So you mentioned the reason why AI is being built, and all this money is going into it, and all these venture capitalists are investing billions of dollars into it, is because of the fundamentals of AI at work. But obviously you're doing this for hot tubs, for school lunches. You were talking recently on your own podcast about how AI is being used for dating and relationships, and it just becomes part of the societal fabric. I noticed when my 17-year-old uses it, it's more just kind of there, just this thing that they constantly are connecting to. They have, like, a name for it that they use in classrooms, so their teacher doesn't know they're talking about ChatGPT.
Kevin 31:05
And what's the name? Can you share?
Sean 31:06
It's like, Chattiqua.
Kevin 31:07
Chattiqua? Wow.
Sean 31:09
Yeah, and anyway, as we know from the history of internet regulation and, like, societal things, sometimes it's these things which get integrated into the daily life of humans that begin to create the secondary issues that then become the things policymakers actually get interested in the most. And so I'm curious how you see this broader play reconciling.
Kevin 31:36
I think the social side of AI gets way less attention than it should. I think we pay a lot of attention to the economic effects of AI, to the workplace uses of this stuff, but the biggest and most enthusiastic group of early adopters of this stuff has been young people. Has been students, has been teens. They started using it immediately after ChatGPT came out, because it was great at helping you do your homework, or in some cases, doing your homework for you. And I hear from teachers. I hear from parents. I hear from teens who say that this stuff is already making deep inroads into their schools and their peer groups. People are starting to have AI friends, girlfriends, boyfriends. People are chatting with AI constantly on apps like Character AI or Replika. They are trusting AI; some of them, many of them, are using it as, like, a makeshift therapist, even though it's really not designed for that. So I think that this is going to be the next great culture war in America. When parents and policymakers realize that their teens are spending many hours a day talking to AIs rather than their human friends, I think there will be a huge freak-out over this.
Sean 36:36
So it's like sex, drugs, rock and roll, and AI.
Kevin 36:59
I'm telling you, there will be segments on Fox News saying, it's 10 p.m., do you know which chatbots your children are chatting with? There will be hearings in Congress where Ted Cruz holds up a giant poster of a ChatGPT response and says, Mr. Altman, why is the chatbot going woke and brainwashing my children? I'm telling you right now, this is going to happen. We might as well get ready.
Allison 33:19
In talking about the media landscape, you have your toes in a lot of different content ponds. You have your books, you have the Times, you have podcasts, social media. Layoffs are happening at a fever pitch at the same time. What does the future hold from your vantage point, and what can all of these different mediums learn from each other?
Kevin 33:36
Part of the reason I started podcasting was as kind of an insurance policy against what I saw as real threats to the text-based living that I had been making. Not just AI, but, like, people are just not reading as much as they used to. This has been pretty well documented. Sometimes we get data about the time that people spend on our stories, and it's very depressing, you know, because you'll spend weeks reporting something, and the average person spends 37 seconds scrolling through it, then they bounce off to the next thing. And so I just thought, well, I need to have more tools in the toolbox. And so I started doing more video. I started doing podcasting. That was sort of my way of hedging against some of the concentrated risk that the thing that I'd been doing my whole career would no longer be viable. And I gotta say, it has been very fruitful. I really like podcasting. It's a much different type of connection to the audience. As you will find out when your podcast blows up and becomes the biggest podcast in America, people have very deep relationships with their podcasts. Being in someone's ears for an hour is very different than being in front of their eyes on a screen for 37 seconds, in terms of just the intimacy, the trust. I find that podcast listeners are much more generous than readers. It's much harder to dunk on a podcast, just tactically; you have to, like, snip it out and post the clip, and it just takes a little while. People will hate-read something, but they will not often hate-listen. And so…
Sean 35:16
Oh, I can change that.
Allison 35:18
I have a couple hate-listens in my rotation.
Kevin 35:21
And so I think it's great. I think it's really a great way to do storytelling. We try to make my podcast with Casey Newton, Hard Fork, entertaining, so that it's not just vegetables. We're still figuring it out, you know, week by week, but I think we've landed in a pretty good spot.
Sean 35:38
When you get the reaction to, like, a big New York Times story, and then your reaction to a Hard Fork episode, how is that reaction different? And do you feel they're complementary, or are you just like, oh, I just love my podcast people, and the New York Times thing is more transactional?
Kevin 35:54
We just got some survey data back; we did a survey of our listeners, and one of the questions that I was curious about is, how much overlap is there between our audience and the audience of the New York Times? And it turns out there's a lot, but it's by no means 100%. Like, there are some people who sort of vaguely know that we're a New York Times podcast, or that I work at the New York Times, but that's not why they're listening. They're listening because it's me and Casey, and we're telling them about the tech news. But obviously the New York Times is such a powerful institution that the ceiling on a story is much higher in print for me than on Hard Fork. So, you know, a story that does well by Hard Fork standards compared to a story that does well by New York Times standards, it's just a much higher number of people who are reading the New York Times. That is a reason to keep doing both, I think, because I like the ability to flex into different mediums. And some stories work much better in one format than the other, just sort of the nature of the medium. And so I like to have both weapons available, and then choose the one that suits my purpose.
Allison 37:06
Yeah, if you write something for the Times, do you feel like you want to then go deeper on the pod, or is it more, this didn't fit into the Times, it fits better with my podcast audience?
Kevin 37:16
There's some of that too. I mean, you can just have a lot more ideas in an hour-long podcast than you can in a 1,200-word column. But the column has some advantages too. It's durable in a way that podcast episodes are not always, and people can sort of bounce off of it and react to it much more easily. So I think there are real strengths to both. I'm not one of these people who thinks that writing is dead or that reading is dead. I just think the bar is higher; the attention span is not what it used to be.
Sean 37:47
In addition to, like, using a lot of tools, you've also covered a lot of ground in your career. Like, from writing a book about Jerry Falwell to kind of Merrill Lynch and the Great Recession, to crypto, now to AI. How would you explain that common thread? Meaning, you know, you're visiting family, and they're like, tell me what you're up to now. Like, how did you get to here?
Kevin 38:11
I'm very interested in communities of people who believe something very strongly, whether it's religion, or people on Wall Street who believe very strongly in their version of the market and the financial system, or people in crypto who believe that we are, you know, going to revamp that entire system and replace it with something on the blockchain, or people in AI who believe that we are creating a machine god that is going to, like, take over and enslave humanity. I just find these people so fascinating. And I think one of the best things about being a journalist is that I can just get up really close to these communities and say, hey, what's going on here? What are you talking about? What are you thinking about? Tell me how you arrived at your beliefs. And most of the time, people are pretty generous in sharing what they know and what they believe with you. So I would say that is a through line through my career. Now, did I plan it that way? No, but if I were to look back in retrospect and try to fit some narrative onto the decisions I've made in my career, that would be it.
Allison 39:13
It feels like a lot of the folks you write about have some zealotry to them, more obvious with the Jerry Falwell crew, but certainly the bankers in 2008 did too, and now AI. So it's interesting; how people arrive at fanaticism must be an interesting thing to write about.
Kevin 39:31
It is. I love people who live on the fringes; it's just such a great, repeatable trick to pull out of your sleeve as a journalist. You know, there's this old saying: the future is already here, it's just not evenly distributed. And I think of my job as a tech journalist these days as going to the places where the future is unevenly distributed and reporting back to everyone else. Like, here's what the people in San Francisco are talking about that may not have made it to your city yet. Here's what's going on in the world of AI that you may not be thinking about yet. I find that so fascinating. I think it's really fun to do. It's also, I think, helpful, because sometimes you can learn from what early adopters and people on the fringes are doing, either because they're right or because they're wrong. I just think it helps. And honestly, I do feel like part of my job as a journalist is just translation. To go back to my very first book, which I wrote when I was in college and which involved, like, going undercover at this very conservative Christian school: as, you know, myself a fairly liberal college student at the time, part of what I wanted to do was just to try to understand and explain this culture that was so different from my own. I think there is a shortage of people in the world today who are actually going and spending time with people who are unlike them in some way, who are doing it with an open mind and not to score points or get likes and retweets, who are just, you know, exploring, being open-minded and curious. I think that is the thing that I love most about journalism, and I'm gonna keep doing it as long as someone will keep paying me to do it.
Allison 41:18
We have a question we end the podcast on every single time, and that question ties back to our title around attention. And you even mentioned previously how quickly people's attention is jumping from one thing to the next. So what is capturing your attention right now?
Kevin 41:34
So right now, I just moved into a new house, and so that is capturing all of my attention. It's like, how do I unpack these 700 cardboard boxes? But the thing online that I would not have expected that is capturing my attention today, and you're gonna laugh at me, is Facebook Marketplace.
Sean 41:52
Wow.
Allison 41:53
That, I did not expect.
Kevin 41:56
So, I have become obsessed with Facebook Marketplace. I had not used Facebook regularly in many years; I thought this platform was dead, it's only boomers posting AI slop. But you know what? I went on to get some stuff for the new house, and it is better than Craigslist. It's there. You can get anything you want. People are selling whatever. And it tickles a part of my brain that, I think, enjoys gambling or getting a deal or something, because now I am, like, deeply addicted to Facebook Marketplace, and I feel so freaking lame admitting that on a podcast. I'm sure it will pass.
Allison 42:31
We can get you help, Kevin. Don't be scared to ask.
Sean 42:35
Alright, Kevin, well, I appreciate it. Thank you so much for this.
Allison 42:40
Thank you for joining us today on Attention Shift for today's deep dive on communication strategy. Check out our sponsors, Delve, at Delve.news, and Mike Worldwide at MW.com. Please like and subscribe to Attention Shift on Apple, Spotify, or your podcast platform of choice, and we'll see you in a couple weeks for our next episode.