In an uncertain world where AI technology is disconnecting us more and more each day, we are dreaming of a world where technology connects us. The premise of this podcast is simple: let's talk to people and imagine the most hopeful and idealistic futures for technologies that connect and do good in the world.
AI-tocracy (00:02.326)
We are on the line today with danah boyd. danah, how are you doing?
Danah Boyd (00:05.977)
Good, how are you?
AI-tocracy (00:07.372)
I'm doing good. It's pretty-ish here in Colorado where you currently are, although you're about to make the move out east. So hopefully, I assume the summer weather over there is going to be delightful as well. Yeah, exactly. You get a little bit more humidity out there than Boulder, Colorado. danah, let's start where we always start, which is what is something particularly inspirational to you, maybe in your life, maybe in the world right now?
Danah Boyd (00:09.444)
you
Danah Boyd (00:20.869)
Hot, hot, and hot.
Danah Boyd (00:35.621)
You know, I was thinking about this because, in some ways, there's not a lot to be excited about. It's a very terrifying time. There's so much going on. And normally I respond to this by watching a lot of cat videos. So I have been watching a ton of cat videos of cats doing ridiculous things, for which I'm deeply grateful. But in some ways I got fed up watching cat videos, and somehow the algorithm started feeding me fun other random things. And one of my favorites, a rabbit hole I ended up down that was totally inspiring, was a pair of students who a decade ago ended up
creating a fascinating innovation. They ended up looking at how they could use sound to block fire and to put out fire. So I ended up in this amazingly weird, inspiring rabbit hole of these two guys, Seth Robertson and Viet Tran, who were looking at sonic booms as a way of putting out wildfires. Thinking about Colorado, I'm always thinking about wildfires. And I was just like, this is just exciting, seeing students who are just like,
I see a real problem in the world and I am going to find a new path forward. I was like, I love it. And so I'm very grateful for weirdo algorithms that will feed me innovations from students, ones that get me excited about the world.
AI-tocracy (01:48.246)
Yeah, well, let's talk about algorithms. So when I talk to people about danah boyd, I'm like, danah's a force, danah's always on the cutting edge of what is happening out in the world and what's coming next. Like your social media work is cited a bajillion times in the academic world. I know you've done some work now on the census and on democracy and civics, but now we're in this world of algorithms and the cutting edge, and a lot of the stories that I keep hearing are
Well, this is inevitable. Like, yeah, there are some tech people in Silicon Valley, they own AI, Anthropic's coming on the scene, artificial general intelligence is coming along, and us as everyday laypeople are just along for the ride. And so what we wanted to talk about today is: what agency do we actually have in this world of emerging tech, AI, these conversations about AGI? Where is our agency, and maybe how do we reclaim it if we don't have it?
Danah Boyd (02:46.883)
No, it's such an important question. Maybe we'll unpack it with two different component pieces. But first I want to say: how did we get to this inevitability rhetoric? We are hearing it everywhere, right? The jobs are all going to be upended. Everything's going to be different. And we hear both good and bad inevitability frames all over the place. And I'm fascinated by them because they have such, you know, political power in this world. They're an opportunity to say, I know the future and therefore you should believe in me.
I'm gonna tell you what the future is and now everybody needs to get in line under the terms that I've described. There are moments where we see this occurring for both political reasons, like traditional partisan politics kind of ways, but also business reasons, right? Which is that when companies can tell you that the future is inevitable, then you feel like you need to jump on board and get into their plan.
And I often think back to a very classic scholarly concept that is usually not thought of in these terms. It's this idea called the social construction of technology. And it's this beautiful idea from the science and technology studies space that tries to say: how did we get a bicycle with two wheels on it? Why was that the innovation of the time? Why didn't it have three wheels or four wheels? Why did you sit facing forward, not sideways? Why didn't you,
you know, why were those wheels mostly the same size? And actually at the beginning of the development of the bicycle, there was a huge contestation over what that technology would look like. And that period is described within this theory as interpretive flexibility, right? The idea that we don't quite know how the technology is gonna work out, so its interpretation is quite flexible. But there's of course these
competing pressures that try to create what that theory calls closure, right? Which is try to create some sort of stability around what this innovation is. And the reason for stabilizing the technology is sensible. You want to make certain that you can mass produce bikes so that they're cheaper, so that more people can use them, right? There's a lot of different kinds of reasons, some of which are about certain people winning out because they don't like their competitors, some of which are about really thinking about the public in general.
Danah Boyd (05:04.047)
So in this period from interpretive flexibility to closure, there's a lot of contestation going on. And what the inevitability narrative is saying at this moment is: ha, this AI future is very clear. We know what it looks like, right? We're going to tell you what closure should look like. We're going to give you that rhetoric because we want to close off this moment of interpretive flexibility. And the reason to do that closing off is because
that way the winners can win, right? That way the big players who own the technologies, you know, the big names. And so of course they're asking for regulation. That is such a playbook from the 19th century innovation crowd, right? The 1880s were all about: regulate so that my competitor can't possibly operate and the terms get set. Well, one of the things that I like to come back to is thinking that actually there's a lot of power in keeping things
flexible and keeping the interpretation from stabilizing. And that is a power that we all have. We all have the power to say, I don't want that future. I don't like your future. I'm going to open up and expand that future. I'm going to create alternate futures. I'm going to challenge this one definition. And each one of us, as we make choices of what to adopt and how to challenge the system, is trying to dictate other futures, right? Trying to say, I want my future to look more like this.
And those futures are contested, right? We don't all agree on what that future should look like. And so this is why these stakes are in many ways political and governance stakes. They're not simply about the best technology. And the example I like, you know, that's sort of closer to us, or at least for those of us who are sort of older, is this weird period in the eighties where there were two competing technologies for how to
view videos before we had streaming and all those platforms. And these two different versions were two totally different technologies. And one of the things that happened was there was an unquestionably better technology that nobody will ever remember, right? And that technology, that Betamax technology, got completely written out of the entire market by VHS, another technology that we thankfully don't have to remember. But VHS won out
Danah Boyd (07:24.793)
because a lot of people with power and money came to say that this is the technology that will win. You know, there used to be these stores called Blockbuster, right? Where everybody came and got their VHS tapes. But that was the technology that won, not because it was the best technology, not because it was even great for, you know, the creators and the artists, but because there was a lot of vested interest. And so this is why, when I think about these issues and I think to our listeners today, it's like,
Think about the futures you want and think about the arrangement of actors at play and how are they defining those futures and what role do you play in trying to disrupt or rearrange or change those futures?
AI-tocracy (08:04.12)
Yeah, whenever we talk about Blockbuster, I'm always like, that was yesterday, right? Like the last Blockbuster, that was yesterday, right? It was not, it was not. I will not date myself with how many times I went to Blockbuster during a week. I...
Danah Boyd (08:12.079)
Sorry.
Danah Boyd (08:21.881)
I used to get their posters, when they had the full-size cutout posters, I as a student would be like, please can I have your poster when you're done with it, please? And then I would put it up in my high school and college rooms.
AI-tocracy (08:36.482)
How many did you amass over your Blockbuster poster collecting time?
Danah Boyd (08:40.077)
A lot, and my mother was very happy to get rid of them.
AI-tocracy (08:42.734)
Well, I'm curious about hope, I think, really. And I guess also how to separate out the hype of a company like Anthropic saying, hey, this technology is changing our world and there's nothing you can do about it. That's a paraphrase, but I imagine someone like Sam Altman has said something similar at some point. And yet,
With all of that hype, there is also a reality that those companies are continuing to build for us where these tools are necessary. Like ChatGPT, if you don't use it, then you are missing out on something, say in terms of productivity right now. Or maybe that's just the story. So for you, how do you separate out the reality of the futures that are happening right now versus the futures that we want and building those futures that are not here yet?
Danah Boyd (09:38.416)
Yeah. So part of it is that, for me, you know, I'm always seen as a technology apologist at different times, which I always find really, really funny, because I am definitely critical of where these technologies come from and what they do in this world. But I want to start with an empirical base. Why are people drawn to them? What makes sense to people? What are they trying to fulfill? So let's look at some of the conditions in which people are drawn right now to AI.
The dominant use of these LLMs at this moment, the large language models that are, you know, what we talk about with ChatGPT and AI at this moment, the dominant use is glorified search, right? It's basically like, I need an answer to something and I don't really want to go to Google because I have to wade through so much crap. Can you just give me a fast answer? That's fascinating, because what's happening is not that people are like, my gosh, LLMs know better than Google. It's that
Google has degraded so badly through its own economic interests, and I'm getting so flooded with crap, and the open web is such a mess of grossness, that maybe, just maybe, this new tool can give me that. Now, does that mean people are thinking much about how those things came into being? No, right? They're not. They're not thinking about stolen data. They're not thinking about who had a right to access things. They are
warily asking whether the information they're receiving is accurate. There's some wariness to this, right? Which is to say, I kind of want it to be right. Maybe it's as right as Google, but I really don't have to overthink this. So please just give me the right-ish answer, and hopefully this will all work out. That is the dominant, dominant use. And it's a reminder that people are trying to access information. Okay, so then let's start to look at the other uses that we usually hear as narrative, because honestly, the fact that that's the dominant use...
I mean Google made a lot of money by getting creepier and creepier. And I guess maybe I should put a big asterisk in here. There's an economic arrangement that is at play with each one of these technologies that we need to acknowledge, which is one of the reasons why jumping to the next thing, the next thing, the next thing right now is often sensible from an average person's response. Because the thing is, the next thing is heavily funded by venture capitalists, and so you're not actually paying anything resembling the true cost.
Danah Boyd (11:59.258)
You're not paying the true cost of the environmental consequences. You're not paying the true cost of running that platform. You're not paying the true cost of actually being fair in terms of accessing that data. None of that's at play. So the result is it's relatively cheap. Just like it was great when Uber first came out, because it was so cheap, because you were not paying a fair rate for that taxi ride. It was there to undermine the competition. So the same thing is happening in AI, as you have, like, Google,
who's both being undermined and trying to jump onto the AI bandwagon. And it's kind of a mess, because, you know, remember that half of a public company's wealth and worth is connected to the speculative markets of the stock market. So there's a lot of economics setting this in motion, but people are just like, okay, I want to have a better search. So it's just useful. Next thing they're doing is, okay, I'd like somebody to clean up my text in any number of ways, right?
Can you please write a letter to my landlord, because I'm really mad and I'm so angry and I know that that voice is not a good one. It's a totally reasonable task, right? You might already have somebody else, your friend, your neighbor, write that for you. So you see all of these everyday, regular tasks. Now the question is, are these tasks worth the billions and billions of dollars, right? And that's where we actually start to see, I have to consider what's going on economically around that, because that's where
you know, the issues start to get a little weirder. So consider, for example, the various forms of visual AI slop all over Pinterest, Instagram, choose your favorite video and image-based system. Well, people are calling it brain rot, because it's true. At the end of the day, do I want to look at, you know, the cats that are animated by AI? I'm like, look at that cat jumping off the high dive board. That's so cute. And then I'm like,
Why am I watching this? Right? This is total brain rot. And so there are these moments of, what is appealing to you, what feeds your cotton candy brain, which is just like, yes, please give me candy, versus, maybe I should have a healthier diet, you know. And we constantly fight over certain kinds of visual elements. But this is what people are playing off of, right? People are making these images, trying to convince you to click on them, because there's an economic incentive, both for the tech platforms and for the creators.
Danah Boyd (14:25.059)
Right? As you have a whole ecosystem now of creators who are making AI slop because they're hoping that you'll watch yet another, you know, unboxing video of the current era. Right? So we see these things as a constant cycle there. And the same thing is happening in companies: we want you to be more efficient, we think you're failing to be efficient. I always find it funny that that is the story of labor throughout history; there is nothing new about this. And so, where is AI actually making people more efficient? That's a
big, big question mark. There's a lot of big question marks there. But where is it also being used as an excuse for management to come in and lay people off, right? As a cost-cutting measure, using AI as the fear, right? Which is like, you've got to work harder because you might be laid off because of AI. That is a fear-based approach to management. Again, not new, right? And where is it actually really motivating people to learn or make sense of things so that they can integrate these technologies in a productive way? And there are some companies that are very healthily being like, look, these technologies might actually help with the work you're trying to do, maybe if you learn to integrate them. And so this combination of carrot and stick is pretty ever-present. And it's happening right now at a moment where, let's be clear, we're still recovering from the COVID pandemic in terms of the workplace.
And especially if we're talking white-collar work, let's just look at white-collar work, which is where the disruption of LLMs is most in conversation. People went to work from home. They stopped having relationships with the people that they worked with. I love this great set of scholarship by a team at Microsoft that actually looked at how, during the work-from-home era, there were people who were part of strong-tie teams, really closely knit teams.
They came out of the pandemic stronger than ever, really bonded, really able to work effectively, just tremendously connected. But the weak ties of the company, the relationships with people you met in the cafeteria and then got to know, disintegrated entirely. And this is actually really notable, because it meant that the short-term gains for these white-collar institutions were extraordinary, right? Because when strong teams
Danah Boyd (16:41.517)
are really strong and effective together, they just go and execute, and it's amazing. But it has a long-term cost, because it's actually those weak ties, the relationships that you build over time through different events, the people you interact with in the kitchen, that allow you to be more innovative within an organization, allow you to actually bring in new ideas and incorporate new perspectives. They allow you to move up the ladder differently.
So there's a reason that all these AI companies that we're talking about are actually mandatory in-person. All of them are, right? OpenAI, Anthropic, all of them mandate in-person work, because they understand that those weak ties are so critical. It's not because they want to surveil you to be more effective in your job, right? Which is what we see in, say, more traditional blue-collar situations. It's because they need those soft, weak-tie relationships. And I put this out there because there's a lot of these labor
conditions that are actually both real and unreal at this moment and they're setting in motion other people's agendas and other people's justifications.
AI-tocracy (17:52.258)
The other week, maybe a month ago, we interviewed Morgan Scheuerman and Mary Gray. And one of the things that came up in that conversation was: well, what has changed? What's changed in the AI ethics space, and is this different? And one of the phrases that you just used is, this isn't a new thing. Parts of this are not a new thing. Some of the labor forces are not new. I'm wondering, as you're making sense of everything
going on, because we're covering a lot of ground: is this AI thing, maybe the stories that we're telling about AI and agency, is this a new thing that's unique to AI and to LLMs? Or is this something that we're seeing as a cycle, something that's repeating from the past, say, the invention of the internet?
Danah Boyd (18:41.317)
I mean, I think in many ways, AI is multiple things simultaneously, which is what makes it so murky to have these conversations. And so I am interested in both what the technology does and doesn't do. And there's parts of the technology that are truly novel and exciting and curious making. But we can feel that way about nearly every technology, right? Like, you're like, that's kind of cool. And maybe that changes my behavior this way. It does something different this way. And like, isn't that nice? But then there's AI as an industry, as a sector, as a set of politics right now.
And that's where I think it's dangerous to sit there and be like, it's all new. Because actually we've been building these dynamics for a long time. And so I keep thinking about how, in the earliest days of COVID, we ended up with a handful of large technology companies suddenly amassing exorbitant amounts of capital. And the thing is that they had done this at a time when, frankly, the tech industry had kind of reached this
lull stage. It was struggling even before we went into the pandemic. You know, social media had been the next thing, and now people were, you know, making small modifications to social media, this way or that way. People were not sure what the next big thing was, and they were making these different kinds of bets. And in many ways, the tech industry was focusing on three bets simultaneously, before the massive influx of money, but also once that happened.
And that was the metaverse, right? The idea that we can all go and be virtual always. Fascinating. Cryptocurrencies, right? Which is like, we're gonna upend our whole political economy through new ways of exchanging money. And then AI, namely the generative AI stuff that we're talking about. And the fascinating part for me is that we actually ran through the crypto and metaverse stuff first, which I thought was really interesting, because you could see the technology around AI was just not mature enough. And so we were like, okay, metaverse: let's change an entire company's name to show that's the investment. That's really awkward. And the public looked at this and went, I don't think so. That might be nice for gaming, have fun, but no thanks. And then crypto, which came from much more esoteric folks, where people were like, what does this even mean? How does this work?
Danah Boyd (21:04.527)
Can I make money off of this, please? Let me figure out how. You know, it blows up spectacularly through various forms of fraud, right? And everybody goes, that whole thing sounds like a pyramid scheme to me. So you saw people backing away from that, even though there's, again, interesting technology in both of those. And then here comes AI, like, hey, we're the new shiny thing. And honestly, if it was not for, you know,
the first demo of ChatGPT, the public mostly probably would have been like, uh, sure, go do your AI fantasy thing. But there was something about that first demo that gave people, like, oh, I can have imagination here. This is kind of fun. I can do something silly. This is playful. Right. And the public was so desperately craving, effectively, an animated cat. Right. They were just like, I need something just
playful and fun, and this will make stories that make me laugh, and, that's great. And so there was something about that moment at that time. We were still in the thick of different kinds of consequences around COVID, you know; politics around the country, and sort of around the world, at that time were really, really on edge. And this was just small acts of pleasure. So that was a perfectly timed demo, right? You know, we often think of it historically as the perfect demo.
and it also happened at a time where it's not just that these handful of big tech companies had stupid amounts of money, but venture capitalists with their large, you know, funders in different ways, which let's be clear, this includes like the pension plans of major governments. This includes the pension plans and the endowments of universities, right? They've got so much capital that they're desperate to find the next thing and get an ROI.
that is beyond, as inflation issues are starting to emerge, to get an ROI that actually can get them out of that financial position, that they're ready to jump. And the combination of those things meant that massive amounts of capital went straight into the AI space. And it's distributed across the AI space. We talk about the bigs, right, who are building the models, who are betting their futures on the ability to pull off the model that everybody will want to use.
Danah Boyd (23:22.405)
At the same time, there's a bazillion and one smaller companies that are trying to use whatever models are available to try to find new businesses. And each one of them is obsessed with trying to achieve a hockey-stick ROI, right? At a moment when things had really stagnated. And so that, for me, is not AI-specific, right? And it's not specific to this moment. It's a combination of forces that had gotten us
to this place politically and economically where this perfect demo becomes an option. And now we're in a very weird place, because people have responded to that perfect demo with either massive enthusiasm, like, I'm all in, let's do this thing, let's find a way, woo hoo, or serious abolitionism, extreme, like, don't ever allow this system to exist. And this is where it's really tricky. The reality is going to be somewhere in between, right?
Some aspects of this will be useful. Some aspects of this will be massively used. What they will be used for, and exactly how we will make money off of that, is still not entirely clear right now. It's amazing that subscriptions are working. No one thought subscriptions would ever work on the internet again. Boy, are people subscribing to these things, right? And so subscriptions are working again. Some people are trying out new business models, but we're in that moment of complete, strange flux. And so this is where it's just like,
I don't think of it as new, I think of it as a set of constantly shaped forces that have collided in this particular configuration at this particular time.
AI-tocracy (25:02.338)
I wanted to go back to play and silliness because it's not something I've thought about as a selling point.
of these LLMs. In some ways I've been thinking about it like Web 2.0 or 3.0, where you have, say, YouTube, where I can make my own content. I can make my own podcast here; I don't need to go through a record label or CNN or whoever. And it sounds like, say, making my own Character.AI girlfriend or something similar. There's a set of stories and a feeling of agency that I have in being able to make my own thing, and make that thing easily. So there's an instant gratification
there. Could you tease out some of the play or silliness and how that relates to maybe the adoption of this tech versus, say, crypto, for example?
Danah Boyd (25:47.63)
Yeah. So one of the most mundane uses of actual generative AI that I found fascinating in talking to people was parents who are generating bedtime stories. Because it turns out that as much as you want to be creative at the end of the day, it's 8 p.m., you want your kid to go to bed, and you don't even know how to make up a story anymore, because it's just gotten to that point, right? And here's this little device that's like, tell me a story with a happy ending
about my kid, like, just give me an 800-word script, please, I beg you, please, please, please. And, you know, they can be funny, they can be joyous. Kids are creating their own little ones, right? And I think about it, like, my kids are obsessed with these dreadful books that are obsessive over Minecraft, right? Like Accidental Minecraft Family, which was recently turned into a movie, which is a whole other challenge. And all of this other, you know, just generated AI stuff. I don't know who's made this Minecraft content. I do not know if it is real, if it is AI-generated, if it's a group of people making it under one author; who the heck knows? That stuff has existed for a long time. We just didn't call it AI slop, we just called it junk-food content. And so here it's like, where does junk-food content actually play a role within society, and how can you actually generate it under your own terms? And it's playful. And also, we had a field day: my kid
at the time of Pesach was obsessed with Hamilton. So we asked one of the systems to just convert the Haggadah into one that had Hamilton songs throughout it. My gosh, was that much more fun, right? And it got the kids actually engaged in Pesach, right? The Seder had new forms of joy at a moment where the Seder was not really going to have a lot of joy. I mean, well, Seders are complicated, right?
So, what are these moments where you want to be more creative, you want to be a more creative and ideal version of yourself, but you can't, but you can laugh and find joy in these things that can actually play with you and that are interactive? And I'm always amazed by this. And I'm specifically amazed by people who are not trying to make money off of it, right? They're not trying to make content to suddenly put up on Instagram to, you know, attract attention. They're just trying to have fun for themselves, right?
Danah Boyd (28:11.949)
And those little moments, I think we need to recognize the beauty in them and why people are willing to pay for them, right? People are paying subscription fees, you know, for DALL-E to make them fake cats, right? Because they can't get enough of their cat hits on, you know, Instagram or on TikTok or whatever. Why? Because there's something relieving and pleasurable about that.
And so I think that's one of the reasons to look at those places where everyday people can do that, right? Because, I mean, many people have tried desperately to take a picture of their own cat, to terrible avail, right? You're always like, how does the internet make good cat pictures, and my cat always looks kind of like a monster? And, you know, doesn't pay attention, definitely has the weird red eyes. There's just so many layers to this. It's about finding those places of pleasure.
And of course there's also politics, like people playing political games. This is one of the reasons I think we have to acknowledge the pleasure of politics, right? How many people made various AI videos of political candidates doing ridiculous things? Not to convince anybody that this was real, but just because it made them laugh, right? It made them laugh. All right. And of course there are versions of this that get really creepy, but we have to acknowledge the places where people just
are looking to have moments of joy that are not the latest version of the same story that Hollywood has been making on repeat, hoping that it'll extract more and more money off of you, but something new and weird and different.
AI-tocracy (29:50.658)
Well, I'm looking forward to your Minecraft movie class at Cornell. I imagine this is a plug for the chicken jockey thing. But I want to introduce a new segment based off of this. We're piloting it right now, but I thought you would be an interesting person to answer it. I'm going to call it three things. And I was wondering if there are three things, like ideal next steps or outcomes, or five
Danah Boyd (29:59.601)
Oh dear.
AI-tocracy (30:20.108)
years in the future, of how you see technology evolving. Three hopeful things that you have. Maybe we'll call it three hopes. What are three hopes you have? And you can be pragmatic if you want, but that's less fun. What are the ways, the ones you dream of, that you would love to see this technology, maybe LLMs or technology in general, go towards?
Danah Boyd (30:44.965)
The way that I'm trying to think about it is that I'm going to take a deviation first and then I will get there. So there's a Catalan scholar by the name of Manuel Castells. And one of the things he does is look at the history of media, and he argues that there's a lot of power in how media operates in the world. And he's interested in the different layers of that power. And the first layer is, like, the power exists within the networks. Then
AI-tocracy (30:49.912)
Yeah, go for it. Deviate, deviate.
Danah Boyd (31:13.733)
how they're networked, how they're structured matters, and that shapes the power. And then people have agency and they can take action and respond and move and shape across those networks in important ways. But of course we also have to recognize that we create the conditions for certain actors to have an exorbitant amount of power in shaping the entire network. And he was critiquing Murdoch in his original analysis. For me,
The important thing to understand about AI and what's going on here is not centering the technology or even the CEOs. It's centering the whole arrangement of actors and what's going on. And so when I have hope, when I think about what I want out of futures of technology, I want to make certain that those arrangements, those actors as they go can be nudged towards outcomes that are more equitable, more just.
more empowering, more, you know, focused on joy and less about extraction, right? And I don't see the future of AI as inevitable, but I do believe that the current configurations that we have set in motion, politically, geopolitically, economically, are making a future that is really toxic far easier to come to than one that is really joyful. So if I'm going to take your hope narrative, I'm going to take hope
not as, like, I'm going to close my eyes and hope that this outcome is going to occur, but more like, how do we have agency here, and how do we move things towards spaces and outcomes that I think make for a better society? Right. And so the answer for me here is about how we start changing the capital arrangement, right? And I have hope that we will start to see different kinds of checks to capital.
And unfortunately, I think we're probably going to see it because things are going to get toxic. But I have hope that we will get there and that we will get to a point where we can actually check those powers. I also have hope that, you know, there are a lot of people that I encounter, that I find myself admiring, who are using these tools to genuinely break through, you know, places where they got stuck.
Danah Boyd (33:36.71)
And I think about these dynamics of stuckness as a really interesting inhibitor towards different kinds of creativity and innovation. And so these kinds of unsticking that actually get me hopeful happen across many different domains. Like talking to more traditional filmmaker types who are not using AI to generate a story for them, because that's dreadful and boring, but are using it to unstick themselves, resulting in all sorts of weirder stuff.
And I'm like, ooh, what are we going to get from that? Or talking to, you know, different kinds of scientists working in the biomedical space who are using AI to unstick. And I'm like, what kind of true innovations are we going to see? You know, I started this conversation by talking about these two college students who figured out how to use sonic booms to stop fire. What unstuck them?
Right? How do we use these technologies to unstick people to actually try to make the world better? So that would be my second area of hope, right? First being sort of getting to an economic arrangement that's not nearly as toxic. Second, like, how do we unstick people? Because that's kind of cool to see. And I think the third area for me is really trying to deal with joy and trying to deal with pleasure. And I often go back to, you know,
Andre Brock, an amazing scholar who thinks a lot about how to use technology just to have fun and what it means to live and breathe and experience joy. And I say this because we're at a moment where we've got, you know, multiple nasty physical wars, crazy cyber wars, panicked futures around climate, all sorts of things that we can lay out that are toxic, you know.
I spend a lot of my time looking at young people who are struggling with mental health. You and I have talked about this. I want to see and I have hope that different opportunities are going to arise through this moment to actually help people find joy in new ways, right? Creativity and pleasure and curiosity and new ways of connecting to people. And again, that is not an inevitable future, right? That is not guaranteed, but it is something that we can collectively work towards and try to enable. And again, it comes,
Danah Boyd (35:58.906)
down to the arrangement of the first, right? Which is the economic conditions make certain kinds of pleasure possible. At the same time, if we've learned anything through history, even in the darkest moments, people do seek out joy and they try to find ways to have joy. And so let's follow their lead and see where they're going because that's where I get super excited.
AI-tocracy (36:19.916)
We are moving towards wrapping up, but I did want to ask a question. So we're recording this on Tuesday, June 24th. There's a lot going on in the world today, especially around Iran and Israel. It, I think, is really easy to unplug or to play around and have play be an escape element. I was wondering how you thought about the difference between, like,
using this technology for joy versus escapism versus, like, hedonism. Like, do you think about the differences there? And especially, is escapism and pleasure a bad thing in this case?
Danah Boyd (36:57.103)
So...
Danah Boyd (37:00.997)
I think we have to acknowledge that in war, people have also always found moments of pleasure. We don't tend to highlight those when we go back in history, but like, you know, anybody who's got family members who survived the Holocaust will get to hear stories of small acts of joy and escapism and ritual and all of these other things as a form of individual resilience in a trying time. And so...
I think that we have to acknowledge that the Iranian public, the Palestinian public, the Israeli public, none of them want to be at war. None of them are enjoying what's going on politically. Right? And so the result is that even as we call on them to speak up, to stand up, to take political action, to demand changes of their government, to do all of these things that we take much more seriously, how do we ensure that they are individually resilient?
Right? And that they can hold steady in a time that's really trying. And that's, I think, what I learned from doing a lot of mental health work: you can't make political change when you're yourself struggling. You need to be able to build resilience, and pleasure and joy are how you build resilience. Right? And there are ways of connecting and finding laughter amongst each other, finding humor in the absurdity. And so I'm not asking people to just escape from the dark times,
but I'm asking them to be more resilient and to live, right? Because what's the point if we're not trying to live? And that is where the joy comes in. But we do have to acknowledge that some people do escape, and there is a hedonism as well. Of course, these things that you're playing at are real. So we can't say all pleasure is inherently good. But again, that's where we can learn. Like, BDSM is a classic place to learn those tensions, right? There are other areas
that move us into different dimensions, that don't even have anything to do with technology, that show us these balancing forces. And so how do we encourage people themselves to navigate them and to find that, so that they can be resilient in the hardest of times? Because we are gonna call on them to make change, right? We are going to ask people to rise up, but they can only rise up if they're whole. They can only rise up to make change when they themselves feel stable and confident, when they're alive.
AI-tocracy (39:27.662)
Yeah, you're reminding me of when I used to work in the hospital as a chaplain and I used to work with a lot of older adults and Holocaust survivors. This was in New York City, mostly in Brooklyn. And I remember the first time I was talking to someone who had survived the Holocaust. He was probably like 95 years old, and he just kept making jokes. He kept telling these jokes about his experience, and he wasn't making light of it. And I was like, wait, what?
Tell me more about these jokes that you're telling me. And he's like, that's the only way I can make sense of it. Like, that's the only way that I can find healing in what happened to me previously. I'm going to make a really sudden shift out of this conversation. There's no graceful way to make the shift, but I do want to close with the question that, as always, I point out I stole directly from Ezra Klein and his podcast, which is: what is a piece of media, maybe it's a book, maybe it's a piece of music, that you would recommend to listeners?
Danah Boyd (40:19.173)
So I sort of went back and forth and debated with myself, and I decided I would give two books for you. The first is actually responding directly to our conversation and its sort of seriousness, right? Which is Catherine Bracy's World Eaters. And it's an attempt to understand how, when we've created the conditions where venture capital is the only financial capital for innovation, that actually undermines so many opportunities for innovation. And so it's a good call for us to rethink our financial considerations
in order to dream of different kinds of futures. The second one I'm gonna give you is actually more in the sort of fiction short story kind of way. And this is Sum: Forty Tales from the Afterlives by David Eagleman. And I especially thought you would love this because it is forty different visions of what happens after we die.
And one of the reasons why this book is so lovely is that you can just sit and think with it, read each one separately, and slowly let it melt into your brain. Some of them are about technology and about different technologies that will help us address the afterlives. Some of them are about faith, and some of them are just about how to live a good life. And I think that this book in general is just, it's such a great meditation. And so,
for those who are just looking to find different kinds of joy and pleasure right now, like, meditating on things that are both serious and joyful just feels like the perfect ending.
AI-tocracy (41:47.894)
I love that. Well, on the note of endings, we unfortunately have to close out our conversation. danah, thank you so much for joining us today. It was a pleasure.
Danah Boyd (41:54.928)
Thank you for having me.