In an uncertain world where AI technology is disconnecting us more and more each day, we are dreaming of a world where technology connects us. The premise of this podcast is simple: let's talk to people and imagine the most hopeful and idealistic futures for technologies that connect and do good in the world.
Welcome to the AItocracy podcast. I am your host, Dylan Thomas Doyle. I know every single person has a podcast right now, and every single podcast has the word AI in it. But here, we're trying to do something a little bit different. We're trying to lift up some of the voices that might not be heard right now in the AI space, the AI governance space, or beyond.
Dylan Thomas Doyle:Basically, anywhere that AI and power are intersecting, we are trying to cover those stories. And this is our first episode. Now, you may remember me from the Radical AI podcast, and you may remember the Radical AI podcast as having a wonderful episode with Emily Bender. Well, it turns out Emily has now written a book alongside Alex Hanna. Alex is also a friend of the show from the Radical AI podcast days.
Dylan Thomas Doyle:And so for our first episode, we wanted to start with their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. We talk about this awesome book, which everyone should go buy right now, and we go into detail about different case studies of ways that AI is getting away from us, about the stories that we tell about AI, and how those stories might be harming us or might just be getting a little out of control, a bit ungrounded. I love this conversation. It was great to talk to Emily and Alex.
Dylan Thomas Doyle:And I should mention that our guest host is another wonderful, amazing person, Calvin Liang. In this conversation, it was awesome for me personally to hear Alex and Emily and Calvin talk about their perspectives on AI and hype and what the con of AI is right now as it's showing up in our world. So I'm so excited for this to be our first episode. Stay tuned for weekly episodes of the AItocracy podcast. Welcome, and enjoy the ride.
Dylan Thomas Doyle:We are on the line today with Emily M. Bender and Alex Hanna. Thank you so much for joining us, both of you. It's great to have you on the show.
Emily M. Bender:It's a pleasure.
Alex Hanna:Thank you so much, Dylan and Calvin.
Dylan Thomas Doyle:And as you mentioned, Alex, we have Calvin Liang on the show. Calvin, hi.
Calvin Liang:Hi. Hi. Great to be here.
Dylan Thomas Doyle:And Calvin, who is an amazing scholar in his own right, but also just a wonderful person, will be joining today as cohost as we talk about this exciting new book of yours, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. But before we dive into this awesome book, I wanna ask the question that we ask everyone at the start of these episodes, which is: what is something that is particularly inspirational to you about your life, or maybe about the world right now? And Emily, let's start with you.
Emily M. Bender:It is a moment where we desperately need inspiration in the world. And I think that what I'm finding inspiring is people who, in the face of all of the rampant destruction of our systems of society and laws and everything, are inspired to create something. You know, is it a song? Is it a zine? Is it a protest sign?
Emily M. Bender:I'm taking inspiration from those acts of creation.
Alex Hanna:Yeah, I think a lot of folks, in the face of state dispossession, are creating alternative systems of support. So mutual aid groups are something I take a lot of solace in. There were a lot of folks around the Palisades Fire in LA; there were a lot of mutual aid groups in that destruction. And even though those are born from destruction, climate-change-fueled destruction, these are places where people are trying to envision local initiatives that are different and mutually reinforcing, reestablishing and reemphasizing our humanity.
Dylan Thomas Doyle:Well, I think that's a good tie in to begin talking about your book. And I think my question is, what is the AI con? And, Emily, let's start with you again.
Emily M. Bender:Thanks. So basically, that title very briefly captures the heart of what we're saying, which is that it's all fake, right? The goal is fake. The technology, the claims about the technology, are fake. The harms are real.
Emily M. Bender:There are some fake, imagined harms, but there's real harm being done. And so it's a con that this is going to make everybody's lives easier. It's a con that this can replace thoughtful work by a human. It's a con that it's actually doing all that without human intervention. And it's harmful by being a con, but not harmful in the way that we're told to be afraid of the technology.
Dylan Thomas Doyle:Alex, anything you want to add about the AI con?
Alex Hanna:Yeah, just adding on to what Emily is saying: AI itself is not a coherent set of technologies. It's not one thing. It's something that's been pasted onto many different technologies and needs to be critically looked at. So the AI part of it is itself a facade that covers so many different types of things, although it kind of rounds up to generative AI, and that is what we take on in the book. And then it's a con insofar as, and I love this kind of theorization, thinking about what a con is.
Alex Hanna:A con is sort of like how you get away with something. It's a scam. The scam is, it's kind of like bullshit. You know? It doesn't really matter what the truth value is of particular statements.
Alex Hanna:It's just that you're trying to get away with something. So the con itself needs to be theorized, and we're seeing this con on a mass scale, as so many venture capitalists, government agents, and public institution leaders are really having the wool pulled over their eyes and buying into this bullshit.
Dylan Thomas Doyle:Could you say more about why you chose con as the metaphor here? It sounds like there are some synonyms that you could have used, or I know in both of your previous work and in your podcast, you talk a lot about hype, and you talk a lot about hype in this book as well. But you chose con. Could you say more about that?
Emily M. Bender:Hype is an important part of it, and we're certainly teaching people in the book how to identify hype and how to undermine it. But I think hype in itself doesn't quite have the insidiousness and sort of the intentionality of con. And certainly some of the people who are like pushing the hype and selling the tech seem to be true believers, but there's also plenty of people who are in there just cynically, you know, doing this because it's a way to make a quick buck. And so we sort of want to locate the responsibility with the people who are making these choices, and then empower our readers to sort of see through it and push back.
Alex Hanna:Yeah. I mean, you have a set of con men, and it's not like they're just particular individuals, although there are certainly central individuals in the book, the people that you'd expect: Sam Altman, Marc Andreessen, Elon Musk, who are, of course, con men implicated in it. But the con man, and it's typically a man doing this, is typically a role in the ecology of capital, regardless of who is doing it.
Emily M. Bender:And I think it's worth pointing out that there's something of a pyramid scheme going on too. So Alex mentioned before the people who are, you know, the heads of public institutions, who have the wool pulled over their eyes and then basically have to shill for this in order to not lose face, not lose, you know, the sense of having made good choices, I guess. And so some of the worst peddlers of the con are the ones who were the victims first and then are sort of re-pushing it. Oh, that cat.
Alex Hanna:I know, cat, sorry. She really loves to be on the table.
Calvin Liang:She loves the camera. I had a question. I kind of wanted to take a step back and understand who you both are as researchers and thinkers. I also noticed that joy and hope are interwoven into this book; in the first line of the preface, you described this as a joyful collaboration between the two of you.
Calvin Liang:And so I'm just curious to hear a little bit more about how each of your unique perspectives blended together to form this book.
Emily M. Bender:Alex, you want to start this time?
Alex Hanna:Sure. Yeah. So I'm a sociologist by training, and Emily and I met when I was at Google, on Google's ethical AI team. I had been the first social scientist who was a research scientist at Google, so I was already kind of a weird person to be in that position, but I've been someone who's been focusing on the intersection of technology and society for fifteen years. When I started my career, I was looking at social media and social movements.
Alex Hanna:And so this itself was spurred from a few papers Emily and I had worked on together, focusing on data, specifically on how data is treated within machine learning, and specifically looking at benchmark datasets. And so we had a pair of papers, Data and its (Dis)contents, and then, AI and the, I always mess up the title.
Emily M. Bender:AI and the Everything in the Whole Wide World Benchmark.
Alex Hanna:Thank you. I usually just say AI and the Everything Benchmark, but it's AI and the Everything in the Whole Wide World Benchmark, or what we call the Grover paper for short. And so something that's been nice is that the thing I've been really focused on is the political economy of AI: trying to understand AI as a technology, its intersection with society and organizations, and what the larger context is around the technology.
Emily M. Bender:So I come in from linguistics. My degrees are all in linguistics. I teach in the linguistics department. I run a professional master's in computational linguistics. And in late 2016, a member of our advisory board said, hey, you might want to get an ethics class into the curriculum.
Emily M. Bender:And I said, great idea. Tried to find someone to come teach it, struck out, said, okay, I guess I'm doing this. And so in 2017, I taught the first iteration of a class that has now become Societal Impacts of Language Technology. And social media was hugely important to me in gathering the information I needed and in building an interdisciplinary set of contacts, a set of people to follow. And Alex was one of those people; we initially met, actually, on Twitter.
Emily M. Bender:So I was working on that, and then meanwhile seeing all of this hype around language models taking over the field of computational linguistics slash NLP. And as a linguist saying, hang on, that doesn't make sense. These things don't understand the way people claim that they do. So on the one hand, societal impacts of language technology in general, and on the other hand, zeroing in on this lesson from linguistics.
Emily M. Bender:And so I think it was on the strength of that activity that I connected with Alex and then got invited to join the group that led to those two papers. And then the book actually comes out of our podcast. And you asked, Calvin, about the joyful part of it. One of the things that has been just extremely important in keeping going in this work is the back-channeling group chats.
Emily M. Bender:Because you get, you know, really awful pushback online. And usually it is not worthwhile, not a good choice, to respond online, especially not as the target of whatever it was. But dropping it into the group chat and getting some support is amazing. And so it was in one of those group chats that I was reacting to some terrible talk about AI that was full of hype, and saying, how do we push back on something that starts off as video? And it was one of our colleagues, actually, Dr. Margaret Mitchell, who said, oh, well, you could do, you know, the Mystery Science Theater 3000 treatment. Right? So that sort of goes into the background. And then we got to this long blog post by a Google VP, Blaise Agüera y Arcas, called Can Machines Learn How to Behave? And I'm like, this is too much to do a text-based takedown of.
Emily M. Bender:That thread would be way too long. You know, I guess we have to give this the MST3K treatment. Who's in? And Alex, who's actually a fan of that show, said, I'll do it.
Alex Hanna:Love it. Yeah.
Emily M. Bender:And that's where the podcast, which is actually an accidental podcast, came from. It was just going to be a live stream for a little while, and then we were posting the videos afterwards, and then decided we would have broader reach if we turned it into a podcast. But it was only maybe around the eighth episode that we made that choice. So any new listeners, please forgive our sound quality at the start of the show. But, you know, the joy: we are suffering through hype, but also it is ridiculous.
Emily M. Bender:And so we have, and Alex coined this phrase, ridicule as praxis. So we'll do these episodes, and we've got another one coming in a couple of episodes, where we just take all of the terrible Fresh AI Hell that's come through and do rapid fire, just the catharsis of laughing at it all. And we'll get through like 20 to 30 items in sixty minutes. And it's awful stuff, but it feels good to laugh at it.
Calvin Liang:Absolutely. Yeah. And I think, from what I know about both of you, you're both so adept and qualified at calling out hype, and I think that really comes together in the book. The perspective I'm coming to this conversation from is that I just finished teaching an undergraduate course called Algorithms in Society.
Calvin Liang:It's really focused on thinking about how algorithms shape our individual decision making and behaviors, and what that means at a broader level. So as I read your book, I just imagined all the ways that it could be incredibly useful as a tool for learning for students. So I'm just fascinated to dig into your brains about all of this. But my question for you is: why write this book now? What about this current moment motivated the desire, the urge, to write this book?
Calvin Liang:Because writing a book is not an easy undertaking. So I'm curious if there's any urgency about now.
Alex Hanna:Yeah. I mean, there's an immense amount of urgency. We posted about this book and people were like, we needed this book yesterday. Because the hype is so, to take a phrase that Emily uses and I learned from her, thick on the ground. It is just everywhere.
Alex Hanna:And there's very little pushback, it seems, whether in the mainstream corporate news or even in other books. I mean, there are just very few. Of the books that have come out, there's the one we review in an upcoming episode, by Reid Hoffman and a co-author, Greg Beato. And it's just a terrible book, just hype after hype after hype. So bad.
Alex Hanna:The subtitle is like, what could happen if AI goes right? And you're like, what? Then Mustafa Suleyman has his book, The Coming Wave. Alex Karp just published a book about AI and defense. You know, from the grave, that ghoul Henry Kissinger has two books on AI hype.
Alex Hanna:There's just so much in the discourse about what AI can do for society, with very little pushback. There's an urgency to even have this counter-narrative out there, because this stuff, I feel, is patently ridiculous, and so much of it is just being shoved down our throats. And so that's the hype and the cultural moment. There's also the economic moment: the chickens are coming home to roost on it. I mean, the bubble looks ripe for popping.
Alex Hanna:There's just been such a decline in investment. Multiple forecasters and, you know, venture capitalists have been like, this stuff is not panning out. You really have to produce some money on this, and none of these gains are being made. So markets can be, quote unquote, self-correcting, but markets are very much based on vibes. And if we can shift the vibes and harsh the vibes a little bit, then maybe that'll deflate a little quicker.
Emily M. Bender:So we finished the manuscript in September of last year, only one day past the contracted deadline. We were on it, and the book was basically written in 2024. And, you know, as we were finishing it, I was saying to people, it will be a better world if, by the time the book hits the shelves in May of 2025, it's not needed as much because the bubble has popped. I mean, I think there's a lot in the book that will help with the next hype cycle, whatever it is. And I actually find the phrase hype cycle optimistic, because it suggests that these things come to an end.
Emily M. Bender:But, you know, so it would be a better world if the market can do its thing and correct before our book hits the shelves. Unfortunately, that seems not to have happened.
Alex Hanna:Yeah.
Calvin Liang:Yeah, and a quick follow-up. I think what I've learned through teaching this class is that there's just a limited awareness of how far-reaching an impact these algorithms are having on our lives. I think in the book you do such a good job of touching on really crucial sectors that this AI hype can create some chaos in. I was thinking about the metaphor of Lake Michigan here in Chicago: just when you think that the lake is going to end, there's more lake. I think that applies here.
Calvin Liang:Just when you think the algorithms are doing their thing, there's more. But I think a key part of this awareness of this cycle is a concept you bring up in your book: enshittification. And I was hoping to talk a little bit more about the connection here. Enshittification is incredibly useful; all the students love this concept. And I found it really fascinating, especially how you connected the process of enshittification to the growth of ChatGPT.
Calvin Liang:I was really pushing students, as people who use ChatGPT every day, to think about what happens when you are fully reliant on this thing and they raise the price. But I'm getting ahead of myself. So I guess my question is: could you quickly provide a definition of enshittification, and talk a little bit about its connection to this AI hype cycle?
Alex Hanna:Enshittification is a term that Cory Doctorow coined a year or two ago. He had been talking about the ways in which free products, or ostensibly free products, get people hooked, and then, through basically the process of having to monetize them, become worse and worse. And so the canonical example is Google Search, which, you know, was an improvement over old Lycos-style directories around Google's launch, you know, twenty-three years ago now. Don't quote me on that, but over twenty years ago now.
Alex Hanna:And it had been a pretty useful tool. Then came the process of adding ad placements and sponsored content, and now, with Google's AI Overviews, it has become completely unusable. And as Emily likes to say, the spillage in the information ecosystem has made just so much slop. I mean AI slop on top of AI slop.
Alex Hanna:And so the product is really unusable. And so students like your own have said, well, maybe I can just ask ChatGPT to do this, and there are attendant problems to that. So now this product too is becoming unusable. Same thing with Facebook: it starts as a way to connect college students, and now it's just AI slop for days.
Alex Hanna:There's proof that Facebook knows about it, and they aren't trying to stop it; they're introducing more slop into it. And so, with that concept, I would sort of argue, and this is what we say in the book, that these things are not useful from the jump. Like, they are born bad. And one of the arguments we have against it is an argument from first principles: this is pretty bad for x, y, and z reasons, and if you're still using it, it's going to get worse.
Alex Hanna:Like, these things are going to become more and more shitty, because, and we almost forecast this in the book, there's not really a clear path to monetization. What are they going to do when they start adding ads to this thing? And this is something OpenAI has said: maybe they're going to put ads around this, because that's the only way that Google and Facebook have remained afloat. All the other paths to monetization have completely flopped, whether it's the metaverse, or Google trying to get into the cloud, where they're still a distant third.
Alex Hanna:And so their product has to be ad sales.
Emily M. Bender:Yeah, so I agree that enshittification doesn't directly apply, in the sense that there's not a good thing here that's getting enshittified. It's bad from the jump. There's no actually good tech-task fit. There are no cases where synthetic media, especially synthetic text, is what you want. I have a whole talk where I try to figure out where there might be, and I end up with basically nothing.
Emily M. Bender:There's nothing where synthetic text is the thing that you want. In addition, I think the enshittification metaphor works somewhat if you think about the way that government services get enshittified, because people say, oh, we can just throw in the synthetic text instead; or the way that the workplace gets further enshittified, because instead of actually doing creative work, you're tasked with babysitting the output of these machines, and so on. So it's a useful concept, but not directly in the way that you might think.
Dylan Thomas Doyle:So I've been writing this talk, for research, about AI and the afterlife: basically, chatbots that are used as grief bots to impersonate a dead loved one. And I've been thinking, as you've been talking and while I've been writing this talk, about what is hype versus what is reality. A lot seems baked in here: what the technology can do, and then the stories that we tell about it. But with how quickly some of this technology is evolving, I don't think I even know when it becomes hype, when it moves from reality to hype, and what that looks like.
Dylan Thomas Doyle:And so I'm wondering, for both of you, if you have metrics of hype, or ideas of when it might cross that line? Or maybe this question is missing the point entirely, but I would love to hear what you think.
Emily M. Bender:It's a great question. I think that, for me, it comes down to a series of questions along the lines of: okay, what are we automating? Why are we automating that thing in particular? What's the input? What's the output?
Emily M. Bender:How are we interpreting the output? And how grounded is that in what the system is actually doing? I think that's where things are likely to fall apart a lot in the case of grief bots. For certain input-output pairs, you can ask: okay, is there actually any possible way that that output is computable from the input? So sometimes people will talk about being able to predict criminality from a picture of somebody's face.
Emily M. Bender:That's just a priori impossible. So when people are making claims about the technology that can't be grounded in good answers to those questions, it's hype.
Alex Hanna:Completely agree. I mean, I think that, you know, your alarm bells should probably go off when somebody is saying, well, it's like magic, or it just produces it. Okay, well, what the hell is happening under the hood that makes it such that it could be doing this? And in that sense, it's helpful, I think, for readers of the book and for students to understand: well, let's disentangle what this thing is actually doing. It's predicting the most likely sequence of words from a set of other words.
Alex Hanna:Is that magic? Well, no. It's statistics. It's some clever architecture choices in terms of this neural network methodology. That's not magic.
Alex Hanna:It is running very large compute cycles to train these models, and then having those models make some kind of inference based on an input set of text tokens, right? So, yeah, from that standpoint, if there's any kind of claim that fails the smell test, I think hype is happening there.
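[A toy sketch of that point, ours rather than anything from the book or the episode: a bigram model that "writes" by counting which word follows which in a tiny made-up corpus, then sampling from those counts. Real language models swap the count table for a neural network trained on vastly more text, but the underlying move is the same kind of sequence statistics.]

```python
# Toy bigram "language model": next-word prediction as plain counting.
# Illustration only; the corpus is made up.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat and the dog sat on the log "
          "and the cat slept on the mat").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate" text by repeatedly predicting a likely next word.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the log and the cat"
```

[Nothing in there understands cats or mats; it only tracks which word tends to come next. Scale that idea up enormously and you get the statistics-not-magic picture Alex is describing.]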
Dylan Thomas Doyle:A quick follow-up. Is AI different from previous technology? You could say that there was an Internet con or a social media con, but is there something about AI in particular? It seems to have captured the zeitgeist in a new way, from my perspective. And I'm wondering, for either of you: has it captured the zeitgeist in a new way, or is it just a new form of the previous cons or previous hype that we've seen with technology over the past, say, fifty years?
Emily M. Bender:I think a few things. First of all, AI as a con is actually not new, right? The term artificial intelligence was introduced in that Dartmouth grant proposal in 1956, and it has been a con and a sort of site of hype all along. We had some reprieve during the so-called AI winter, when it was not fashionable to talk about AI. So I think that con has been there. And, you know, having spent the 1990s in the SF Bay Area and lived through the dot-com bubble, and then the whole new economy thing, with everything's open source and that's going to make money somehow...
Emily M. Bender:I think in those cases, generally speaking, the people being conned were investors more than the public at large. The public at large really lost out with the dot-com boom because of the damage done to brick-and-mortar stores and main streets and communities. So this feels a little bit different, right? In the dot-com times, the idea was: if we stick .com at the end of the company name, we're going to make a lot of money. And now it's: if we tell you it's AI, we're going to make a lot of money, but also, therefore, the technology is going to be magical.
Emily M. Bender:And that to me feels a little bit different.
Alex Hanna:Yeah, I would say that the AI con, and we say this a little bit in the book, is more akin to, let's say, the crypto or blockchain hype cycles. Most people aren't implicated in those, insofar as most people aren't putting a bunch of their money into crypto and then losing it all, although there's a lot of that happening; see, for instance, Trump's meme coin and people just dumping a bunch of money into that. I think they're a bit different from, let's say, social media and the internet, where there's probably more that's retrievable. Social media is not an unalloyed good, and there are a lot of good things that can come from it, but we've seen so many externalities of social media that have really had negative effects, including surveillance and centralization and monetization and all those different things.
Alex Hanna:And perhaps there's a way to have different kinds of social media. I mean, that's where the Fediverse is promising, as well as different approaches to the internet. There's an essay, and I love this metaphor, I think it was in Noema Magazine, about rewilding the internet: what would it mean to have fewer of these platforms and more of these, you know, smaller websites in which like-minded communities could coordinate? I think it's different insofar as generative AI in particular has very few applications, and it would have to look a lot different from how it looks now to have those positive applications.
Alex Hanna:And so the whole structure of the political economy of it would be much different than it is now.
Calvin Liang:I kind of want to return to something you mentioned at the beginning of this conversation around ridicule and introducing humor. And I thought about this even this morning, scrolling through my feeds. I'm a big tennis fan, and the Indian Wells tennis tournament just finished, where Jack Draper and Mirra Andreeva each won their respective singles titles. But this is relevant because a prominent tennis news outlet tweeted out this sort of congratulatory photo of Jack Draper.
Calvin Liang:And it was just one of those images that just, like, reeks of being AI-generated. Right? And in this photo, Jack Draper looks so weird. Like, his face is just all strange. And in the comments, everyone was sort of like, what are you doing?
Calvin Liang:Like, why are you using this, like, image? Like, don't you know Jack Draper is, like, an incredibly handsome, like, British man? Like, you could have used any normal image of him. Like, why are you using AI? And, really, I feel like ridicule is, like, really popping up in these comments.
Calvin Liang:And so I guess I'm curious to hear you talk a little bit more about the use of humor and how it can be strategically positioned to counter the AI hype.
Emily M. Bender:Yeah. So I think part of it is there's this meme out there that if you don't build the tech yourself, you're not qualified to comment on it. That is something that grates particularly on me as a linguist in the interdisciplinary field of computational linguistics, having spent years and years pushing back against the marginalization of what computer scientists sometimes call domain expertise, and what Dr. Tanigabro and others have talked about as the logic of domains. I don't think he's the original source of that, but I learned it from him.
Alex Hanna:Yeah, David Ribes. I've mentioned this essay quite a few times. David Ribes and a few others talked about the logic of domains. Yeah.
Emily M. Bender:Yeah. And so from any other field, there is disciplinary expertise. And we might be interdisciplinary, we might bring different expertise. But computer scientists like to talk about, well, there's computer science and then there's the domains where we apply it. And the whole deal with machine learning is that domain expertise is expensive.
Emily M. Bender:So we're going to automate that. We're going to do it straight from the data. And so I come from this position of being told over and over again that my expertise is expensive, sure, but that doesn't mean it's valuable. It's to be replaced. And using humor to sort of point out just how little the people peddling this hype understand about the world feels like a powerful way to push back on that and to undercut it.
Dylan Thomas Doyle:I'm curious about hope. Your last chapter is titled Do You Believe in Hope After Hype? And you could talk about that in terms of the book, but I'm also wondering, for each of you individually: do you believe in hope after hype? Where is the hope in all of this, in this con that we're all living through? Alex, would you mind if we started with you?
Alex Hanna:Yeah, yeah. I mean, I'm finding hope in different types of places in which people are really controlling, you know, the means of computation, as well as controlling data. I think a lot of the examples that we use are of people who want to see elements of their technology controlled by communities. We talk about the te reo Māori language, sorry for mispronouncing that, and the technology developed by Te Hiku Media, doing things like machine translation that is controlled by the community and that gets trained on data that the community controls and maintains. And there are similar projects.
Alex Hanna:One thing that we're trying to facilitate at DAIR is something called the Hunihi Federation of African language startups: people doing things with language technologies that are controlled by those people. And those projects, I think, are really helpful because they point to different kinds of governance structures and different types of technologies, controlled by the people who need them the most, who are able to envision what alternative realities of technology can look like. And one thing about it, and this is a point that Ruha Benjamin makes a lot: imagination is something that we need to cultivate. We're always on the back foot, it feels like.
Alex Hanna:We're trying to defend against terrible technology and terrible oligarchs and terrible billionaires. But what can we do if we imagine, and make imagination this resource that we cultivate and nurture and flex and exercise? So I think that gives me a lot of hope. In a time when there's not a lot of hope, we need to maintain and exercise that muscle.
Emily M. Bender:I absolutely love Ruha Benjamin's Imagination. It's a wonderful book, highly recommended. And I agree with what Alex was saying.
Emily M. Bender:Another thing that gives me hope is that I frequently have conversations with people who have deep expertise in some field of endeavor. It might be a journalist, it might be a lawyer, it might be a poet, and the conversation starts with them saying, you know, it looks like this can do my job. And I say, well, no, it absolutely cannot do your job, because the form of a newspaper article is not the same thing as the process of creating a newspaper article, and therefore it is not a newspaper article. Same with poetry, same with sermons, same with legal contracts, all these things. And the pushback is to take pride in our own expertise, and to take pride in our own skills and our own lived experience and what we do with that.
Emily M. Bender:And that pride is hopeful, and the antidote to a lot of this hype is standing with our expertise and our relationships and our humanity. It just feels like a really great thing to push off from and into, right? So off of the hype and into these very human things. I find that very helpful.
Calvin Liang:I have another question. Again, I keep talking about this class; it's so fresh in my mind. My students are writing their final papers as we speak. But one of the assignments I created was an algorithmic cleanse, inspired by Kyle Chayka's book Filterworld.
Calvin Liang:And so I challenged students, for twenty-four hours, to not encounter any recommendation algorithm. No For You pages, no Spotify-generated playlists. And I was struck by how many students really, really believed in their ability to do this. But as soon as they encountered one thing that broke their cleanse, they were just sort of like, okay, it's over. I give up now.
Calvin Liang:And so it would last until, say, they looked something up on Google Images, and that would ruin it. I guess that speaks, to me, to the idea of being trapped in this cycle of being so reliant on algorithms, so reliant on AI, to get through everyday life. And I'm curious what you would say to someone who feels like they can't break out of the hype cycle. Maybe we'll start with Emily.
Emily M. Bender:Okay. So I think the starting point is, first of all, it helps to disaggregate AI into what are all these different things, right? So recommendation systems are one. Spell check is another. I don't consider it AI, but it gets called that these days.
Emily M. Bender:Automatic translation systems, automatic transcription systems. You know, you might be applying for a job and have your interview video processed by some system. So basically, disaggregating all of that and realizing that we also have agency about saying yes or no to some of it, right? I'm going to continue using spell check. I don't spend a whole lot of time on For You pages, but sometimes things get served up to me that I enjoy.
Emily M. Bender:I don't mind the fact that Netflix has a recommendation algorithm behind it. And so I see it as: okay, I am using various different kinds of tech; when it's my choice, how do I decide? And then, when it is being used on me, how do I reclaim agency? So, for example, following the leadership of the Algorithmic Justice League and Dr. Joy Buolamwini, I refuse the photographic surveillance in airports. And I do that not because I personally feel like I'm in that much danger, although that could be changing, but to help normalize the refusal.
Emily M. Bender:And so that's an intentional thing that I do, and it's a kind of reclaiming of agency, and it's not all or nothing, right? Those individual acts matter.
Alex Hanna:Yeah, and I mean, I think individual acts are important, and I think resistance is important. And there's a lot of tech resignation. You know, it's sort of like, well, privacy people tend to call this bankruptcy. They're like, well, they have everything about me right now. Like, what am I going to do about it?
Alex Hanna:And I think that's important to push against. And I think it's about cultivating, and doing it in a way that's not individual, because that can be very lonely. What are ways of collective pushback? What are ways of thinking about mass resistance in different ways? And that could be, you know, talking about it within unions, talking about it within group chats, talking about it within friend groups. I mean, what are ways that you can collectively resist these things, right?
Alex Hanna:And I mean, one of the things, I'm actually pretty surprised that this Tesla Takedown set of protests has become a thing, because I think that's actually pretty amazing. I mean, the individual act of saying, I'm not going to buy a Tesla, is probably not going to go so far, but people are going to dealerships and they are protesting or, you know, committing acts of vandalism or destruction. And I mean, I don't condone that; I'm going to say that on this podcast, in case you're listening and that becomes illegal someday. But it's pretty impressive that those are collective acts that are occurring. And it does show, and we're seeing, a material impact on Tesla's stock price.
Alex Hanna:So, I mean, those collective acts are very necessary, and they're things that we need to cultivate right now.
Dylan Thomas Doyle:As we move towards closing, I'm going to ask you both the big last question that we ask everyone, which is, can you describe what a world of joyful technology looks like to you? Alex, would you mind if we started with you?
Alex Hanna:Yeah, I mean, I've been trying to think about this so much, and I think a world of joyful technology is one of technology that facilitates human flourishing and development. Human flourishing is this term that some of the existential risk people have taken, and I want to claw it back, because the first place I encountered it, speaking of the helpfulness of education, was via a course with the late Erik Olin Wright and his Envisioning Real Utopias book, in which he talks about what it would mean to have socioeconomic arrangements in which human flourishing could thrive. And I think technology in that sense would probably be technology that helps people maintain their autonomy. It helps people, especially the people who are most marginalized, toward self-actualization.
Alex Hanna:It is not centralized to a set of profit-driven technology companies. Instead, people have control of the means of computation, the rights to their data, the rights to their personality and likenesses, and it's used to extend what that means. And I think we have these kinds of experiments in different interstices. I mean, that's Te Hiku Media, or certain labs that control their servers and facilitate communication.
Alex Hanna:And I mean, that's really what I think joyful technology would mean. But I think to get there, we also have to change the sociopolitical and economic structures of what motivates tech development right now.
Dylan Thomas Doyle:Well, that's what you're working on, right? Just let us know when you fix that.
Dylan Thomas Doyle:Hopefully this book is a step towards it. Emily, what about you? What does a world of joyful technology look like?
Emily M. Bender:So I appreciate getting to go second here, so I can sort of yes-and what Alex said. And to add that getting to where all tech has that property requires changing the sociopolitical system, but that doesn't prevent us from building some tech that has that property now. A couple further details I would add: joyful tech is consentful tech. And I want to point to the Consentful Tech Project at consentfultech.io, where they say that consent is freely given, reversible, informed, specific, and enthusiastic. And can you imagine? If everybody impacted by the creation and use of a technology were doing so consentfully, we would be, I think, in a very different place.
Emily M. Bender:And as Alex says, it's going to take a lot to get to where that's true for all tech. But when we are building further tech, we can keep those kinds of principles in mind. And then I think the second thing that makes tech joyful is when it is embedded in and supportive of relationships. So technologies that help people relate, technologies that help bring communities together, technologies that are not based on relationships of exploitation, of workers and data and the environment, are really important. There I just want to shout out the All My Relations podcast, which I think is a wonderful place to go.
Emily M. Bender:It's not a tech podcast, but it's a wonderful place to go to really think about relationships.
Dylan Thomas Doyle:Well, as we close, for listeners: make sure to buy The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by Alex Hanna and Emily M. Bender. It will be out by the time this episode is out, so just go look it up. Maybe don't use the Google AI search to get it. But is there a place, actually, that you recommend folks go in order to buy it?
Emily M. Bender:So start at thecon.ai. Alex, in her brilliance, somehow got that URL for us. We have links to sellers, including bookshop.org, which is a great way to support local booksellers. And, you know, you can request it from your library too.
Dylan Thomas Doyle:Wonderful. Let's support our local libraries. Alex and Emily, thank you so much for joining us, and Calvin, thank you so much for co-hosting.
Emily M. Bender:Thank you all very much. Thank you.
Calvin Liang:Thank you.
Dylan Thomas Doyle:And that's all the time we have. Thanks again to Calvin and Emily and Alex for joining us today on the AItocracy podcast. Again, this is our first episode. We're gonna be going weekly going forward as part of our summer speaker series, the Triple S, as I'm calling it in the back of my head. We have some amazing, amazing guests coming up that I'm very excited to announce soon.
Dylan Thomas Doyle:But for now, please follow us on all of the different socials, the Instagrams, the TikToks. I'm so old. The LinkedIn, if you feel so called to be in that community. But, really, we're trying to bring people together, and we're trying to talk about the real stories around AI right now in the world. Welcome again.
Dylan Thomas Doyle:Thanks so much for joining us.