Join us, feminist coaches Taina Brown and Becky Mollenkamp, for casual (and often deep) conversations about business, current events, politics, pop culture, and more. We’re not perfect activists or allies! These are our real-time, messy feminist perspectives on the world around us.
This podcast is for you if you find yourself asking questions like:
• Why is feminism important today?
• What is intersectional feminism?
• Can capitalism be ethical?
• What does liberation mean?
• Equity vs. equality — what's the difference and why does it matter?
• What does a Trump victory mean for my life?
• What is mutual aid?
• How do we engage in collective action?
• Can I find safety in community?
• What's a feminist approach to ... ?
• What's the feminist perspective on ...?
Becky Mollenkamp: This is the day after the election. We just did our live episode, for anyone who just listened to the last episode. If you're watching or listening to this later, which you will be, that's what just happened. We're real raw today. But we did the live episode about the election, and we didn't want to just talk about it again because, I don't know, maybe you feel the same way, but I'm in pain. Like I'm just…
Taina Brown: Very raw. Yeah, I'm done talking about it, you know.
Becky Mollenkamp: So we're gonna talk about something else. You'll be listening to this on, like, the Monday after the election. I'm reasonably confident that the only thing on most of our brains will still be this catastrophic election. But here we are, let's distract ourselves with something else, and we're gonna talk about AI today. So here's what I think we'll do, if this sounds good, Taina: we have an episode we made with NotebookLM, which is Google's new tool that can create an audio discussion from any information you feed it. I fed it a few episodes of our show and said, like, create a discussion about the Messy Liberation podcast. The episode it created is six minutes and 45 seconds. If you only want to listen to part of it, or if you want to skip the whole thing and just hear our discussion about it, skip ahead six or seven minutes and you'll come back to us talking about it. But we wanted to give you an example of what this new technology sounds like, how it could be used, and how scary it can be. So, OK, I'm going to splice this in now, magic of technology, and you can listen and we'll come back and talk.
Becky Mollenkamp: Okay, we're back. That was the messy liberation conversation. And just so everyone knows, those two voices you heard are not people.
Taina Brown: They're AI generated.
Becky Mollenkamp: Correct. So it's the same two voices that are used for everything that tool does right now. I am certain that eventually there will be a range of voices you can choose from. But right now it's just what sounds like probably a generic white guy and a generic white lady.
Taina Brown: Yeah, Siri and… what's the other one? IBM came out with a few, I forget what they called it.
Becky Mollenkamp: Did they ever have a man's voice? It probably didn't last because, you know, we like to have women be subservient, telling us things and, like, helping us. So of course it's women's voices that are used.
Taina Brown: So before generating this discussion, and I'm assuming you're calling it that because you don't want to call it a podcast episode, which I'm on board with, or I guess just in general: what are your thoughts, and what do you know about AI, Becky?
Becky Mollenkamp: What do I know about AI? I know that it makes my brain hurt. So the LM stands for language model, right? So I understand that AI like this is a language-modeling system. It uses available data, feeds all of that data into these computers that can then take it at a rapid, scary clip and turn it into something, almost create something new out of it. But it can't create whole cloth, because it's not human. It has to have information to create something. I am sure that anyone listening to this who actually understands AI is rolling their eyes hard, like, my God, she doesn't understand it. And I don't really understand it. I just know that it scares the crap out of me. And it's here to stay.
Taina Brown: Yeah, it's not going anywhere. I don't know much about AI either, other than the fact that what it gives us is what we first give it. And then it takes, like you said, it takes that and generates something quote-unquote new for us. But it's really not new. It's just the same thing repackaged in a different way, maybe an evolution of an idea or a thought. I've read a little bit of Safiya Noble's book, Algorithms of Oppression, which came out some years ago. She speaks, researches, and writes about the intersection of technology and systemic oppression. She's out of UCLA, I believe. And she actually used to work in technology before she went back to get her PhD, so she's kind of an outlier among researchers who are talking about AI in that way. And honestly, the first time I listened to that discussion that you sent me on Slack, I was kind of freaked out. First of all, it feels really surreal to hear these fake voices that are probably a conglomeration of a bunch of different voices that have been recorded, right?
Becky Mollenkamp: Yes, because we know Scarlett Johansson sued about AI using her voice. A voice that sounded eerily similar.
Taina Brown: Pretty similar to hers, yeah. So it felt really surreal, but then it was also interesting, because the way the discussion is paced feels very much like a computer giving me information. It feels like every part of the conversation, the back and forth, is in response to a question. It doesn't feel organic. It doesn't feel like a natural progression of a conversation. And I could definitely tell how it was quite literally regurgitating information from the episodes. It was saying things verbatim from the episodes, just with a different intonation. Or if you fed it a question, it gave it back as an answer. You know what I mean? So it felt like a clone of an episode. It didn't feel original, if that makes sense.
Becky Mollenkamp: It does. And to be fair, I don't know that I challenged it to do something very original or creative. It'd be interesting to try that and see what happens if you say, create something new using this as a guide. But let me be clear: what Google says NotebookLM is for, right now anyway, is not creating podcast episodes or original thoughts. The idea behind it is really more of a study guide, and I can see the benefit of that. Basically, you could say, take this document, or this study guide from my teacher, or I suppose a PDF of a book; there are only certain types of documents it can upload and read. If you give it enough information, you could say, take all of this stuff from my class and create a 10-minute conversation that talks me through all of the salient points. In the same way that there are podcasts out there that do a similar thing for books: they read the book and give you basically the Cliffs Notes. I can't remember what they're called in Canada; I only recently learned they don't have Cliffs Notes, they have something similar. But anyway, it gives you that version of something. And to talk about accessibility for a second, I think there's something really beautiful about that potential for people who learn differently: auditory processors, people who can get overwhelmed by lots of different information and do really well when it's presented in a certain way. Because if you listened to the episode, can't you hear how it would be really good at talking you through a bunch of different articles it read, right? So for what its intention is, it's great. But of course, this also gives us, and I think this is what we're thinking about, a glimpse of what is next.
So yeah, it's not intended for podcasts. But if this technology gets to where we can create entire conversations, whole cloth, from just these AI-generated voices, that feels scary. And I just saw a statistic that there are already 800 or 1,800, and this was last week, so I'm sure it's doubled by now, AI-generated podcast episodes out there. So it is being used for that. It is coming. And the ethics of it are a little scary, because right now anyone who uses this tool is going to hear those same two voices. You might get real familiar with them soon, right? But what happens when that begins to shift and there are endless voices? What are the ethics around that? Are the platforms going to enforce some sort of warning that an episode was created by AI, and how could they even know, how could they enforce it? It's a little scary, what it could do.
Taina Brown: I didn't know there were already AI-generated podcast episodes out there. I think the ethics of it is really where a lot of the tension is, obviously. I know there's kind of an honor system right now if you use AI to generate images, right? The expectation is that you will let people know it's an AI-generated image and not an original work, something someone came up with on their own.
Becky Mollenkamp: YouTube is the place I've noticed it, where they actually have you select and say yes or no. Exactly, it's the honor system.
Taina Brown: Not everybody does that. So I'm wondering, and I don't know the answer to this, I don't know if you do, but I'm wondering if the people who are working on AI are also working on an algorithm that can detect whether something is AI generated or not.
Becky Mollenkamp: There are people trying to figure that out, but they're behind. I feel like this is what always happens with technology: the people trying to figure out how to fight its nefarious uses are always a step behind.
Taina Brown: The work to create checks and balances is seen as a hindrance, right? It's seen as an obstacle. So it might not get as much funding, as much attention, as much visibility, et cetera. I think my other concern with AI, obviously, is that it's only as good as the information we give it, and we live in a fucking hellscape, so the information we're giving it is not the best. But also, and this is something I've seen people try to shout from the rooftops, and it usually gets silenced for the sake of progression and innovation: the effect of AI technology on climate change. I don't think we as a human species are at a point where we can really build momentum on AI technology without addressing some of these climate change issues. Because apparently, I don't even know what the number is, but it takes a ridiculous amount of water just to run a small amount of AI. And I see you're looking it up, so thank you for doing that. So I think it's harmful not just in the ways we immediately think of, but in so many other ways.
Becky Mollenkamp: AI's projected water usage could hit 6.6 billion cubic meters by 2027. And Google's data centers in 2023 consumed 5.56 billion gallons of water. And that's, like, we're barely scratching the surface.
Taina Brown: Wait, and where is Google's data center? Don't tell me it's in fucking California. That's been in a drought forever.
Becky Mollenkamp: I don't know if they are actually pulling it from there. I don't know if the data centers are located where they are. My guess is they're not. They're probably located in foreign countries with far fewer resources, resources that Americans are consuming, because we love to do that. In the same way that the more you learn about how lithium batteries are mined, and the disgusting human mistreatment involved in that, it's tough. And I hear you on that, and I feel scared, because much like lithium batteries, there's only so much you can do as an individual when everything moves toward it. All the technology we need to function in this society, whether you have a business, you're an employee, you need to be able to get ahold of your kids, whatever, requires lithium batteries. It's hard to opt out. With AI, right now, you can opt out. But it seems pretty clear, the way tech is moving, it will soon become a similar thing where it's not easy to opt out. It's not as simple as "I just don't use AI," because it's getting integrated into everything.
Taina Brown: Yeah, it's like smartphones and Wi-Fi. You really can't exist without a smartphone or Wi-Fi. So I looked up Google data centers, and they're all over the world. There's this long list. Most of them are in the US. I'm not seeing anything in California, but there's definitely some in Nevada.
Becky Mollenkamp: Which is also in drought.
Taina Brown: I'm also wondering is that statistic that you found per data center or is that total? Because if that's per data center.
Becky Mollenkamp: It sounds like it's total. It said Google's data centers consumed that. And apparently asking 20 to 50 questions on ChatGPT uses about 16 ounces of water, the equivalent of half a liter.
Taina Brown: And I know I've used ChatGPT in my business. I don't use it often, but I have used it for brainstorming and ideation.
Becky Mollenkamp: I use it for writing the show notes for the show, because if I don't, the manpower that takes would make this untenable. And these are the challenges I think we continue to face. It's tough. We get into a place, in just the same way, where you could say, I don't use a cell phone, I don't have a computer. Cool. Good luck making a living. And I'm afraid AI is going to the same place, because we have such expectations around productivity that you won't be able to compete and stay employable, or in the case of running a business, profitable. And that is scary. It is really scary. And I think the climate change piece is huge, so I'm glad you're shining a light on that. There are also some of the other ethics concerns, beyond just clarity about who's creating it and what, you know, is this AI, is this not, is it pulling information from, like, Russia or what? It's been found, and I think the book you mentioned is one that talks about this, if I'm not mistaken, that it's just inherently racist, inherently sexist, because it's created using information from our systems, which are inherently sexist and racist. And that's also really problematic as it gets more traction and more use.
Taina Brown: I think the question, obviously, is how do we move the needle forward on technology and AI innovation in a way that's not to our detriment, that isn't going to speed up the clock on a climate catastrophe, that isn't going to continue to harm folks from the global majority who are continuously living in systems that marginalize them. And I don't know that I have an answer for that. I don't know enough about technology to really know how to deal with the technical side of it. But I know enough about people. When it comes to creating AI solutions, I feel like there's this thing that happens in the workplace, and in life in general, where it's like, sure, he's an asshole, but he's really good at his job, so we just let him get away with stuff or look the other way. And we can't afford to do that as we're building structures and technologies that are going to have such a heavy impact on our day to day. We can't afford to look the other way when people are not at a place where they can value human life and human dignity, and are not doing as much as they should to be unbiased. Because no one's ever going to get it 100% perfect. I'm not going to get it 100% perfect. I do messy shit all the time; I say a lot of messy shit on this podcast. No one's ever going to be 100% perfect. But I think we haven't figured out how to be human yet. So how the fuck are we moving toward technology that has human capabilities? We don't even know how to be human to each other.
Becky Mollenkamp: It's scary. Well, and speaking of messy: I love using ChatGPT. It's fun. It's helpful. It's interesting. It can give you a lot of new perspectives. It makes things easier. And sometimes it's actually just kind of fun, it's novel. Like, my God, I'm doing this and it's immediately giving me this answer, taking information I gave it and giving it back to me in a new way. There's a lot about it that's super freaking cool, OK, I'm not going to lie. And that scares me even more, because it feels like social media in that way.
Taina Brown: It feels like a game.
Becky Mollenkamp: Yeah, I am worried about the attention-economy piece of that, and the possibilities for that. Think about where technology and social media have gone and what that's done to our attention spans, to how our attention is being monetized. ChatGPT feels like it's gonna take that and amp it up by a million, because now it's like your friend. Now it's this weird parasocial thing. Like, I'm scared. Like the movie Her, which I never actually saw but I know the premise of, right? This guy who falls in love with what is basically Siri. That shit scares me. We already have enough young white men who are lonely, who are angry because they're lonely, who are seeking comfort and solace and solidarity online with other young white men who feel the same way. And this election showed some of what that means, right? And who those people are, those Joe Rogan listeners. And now you give them AI that can function as a girlfriend, right? That will tell them what they want to hear. That will be that perfect woman, quote unquote, for them. I'm terrified of what that could start to do to some of these young men's minds, and then how they show up and interact with the world outside of this world that gets created for them, a world that is perfect, because what AI can do is create a world of friendships and relationships where everything is exactly how you want it to be. And you don't have to learn how to deal with humans. Like you said, we don't know how to be human yet. They don't even have to learn how to have human conflict.
Taina Brown: Human connections. Yeah, human conflict. I agree. I like using ChatGPT too. And this conversation is really making me think about how I use it, about being more intentional when I give it a prompt, coming to it with the very specific question I want to ask so I don't have to go through multiple prompts to get the information I'm looking for. I mean, there are stories out there about people befriending AI. There was a kid in Florida, I was just looking up the story, this was earlier this year, a 14-year-old who took his own life because he quote-unquote befriended an AI chatbot and was like, I want to come home to you. And the bot said, yeah, come home, I love you. God, it's just… the reality of that story. I'd heard about it, but I'd never said anything about it out loud, and it's really hitting me now. And I've noticed as I've used ChatGPT that there is some confirmation bias that happens, right? How you ask the question, the language you use, the words you use: it will just spit out what you want to hear based on the prompt you give it, like you said. I'm just feeling devastated and overwhelmed.
Becky Mollenkamp: It's not designed to challenge you. It's not designed to, you know, make you think in new ways. It's designed to do what you want it to do. And that sounds really cool and can be in a lot of ways. And when you extrapolate out what that can mean, it's terrifying. And thinking about children getting more and more access to AI because it's going to be showing up on their systems that they use, all their devices. It's going to be showing up in the games that they use.
Taina Brown: It's on the phone now.
Becky Mollenkamp: It's going to show up inside the games that they use. It's becoming this thing that I think is soon going to be everywhere. And that is really, really scary as the mom of a young child who loves watching YouTube videos and playing Roblox. I already have enough challenges managing all of those things. Adding this stuff scares me, because I can already see how emotionally invested he gets inside a game playing with an NPC, a non-player character, not a person, and how he sees those things as almost real. Then add in AI being able to make those non-human characters inside games feel even more human, respond in real time, and engage in ways designed to capture attention longer. That scares me so much. And when you talk about rethinking how you use ChatGPT, I feel that, and I think it's valid. But unfortunately, I think it comes back to, as it does with everything: individual changes are important, but how much do they matter when the likes of Google and IBM and Apple and Disney and every major company is going all in on AI? Their usage, their required usage for employees, and the ways they're putting it into everything so you can no longer avoid it, that's the real problem, not your choice to use, you know, two fewer search terms. Still, I think we all need to explore the ways we interact with everything and how we participate in these systems. But ultimately, we need the systems that are meant to protect us to do their damn job and say, hey, corporations, this is going to kill the planet. It's going to kill kids. It's going to destroy our ability to be human. And they're not going to do that, because there's money.
And I don't want to bring it back to the election too much, because I know it's depressing, but we just put into office a whole slate of people who have zero concern about any of those things and care only about money. And the reason they're in office is often because of the likes of Elon Musk and Zuckerberg, all of them, who you know are looking at AI as a fucking cash cow.
Taina Brown: Yeah, absolutely.
Becky Mollenkamp: And not to make it more depressing, but that amplifies my fear. Any thoughts I had about AI a week ago, when I thought maybe things could go okay, are now fueled by this deep existential dread I have.
Taina Brown: And I think, to your point about companies acting ethically and being regulated: one thing we know is that when Republicans are in office and have control of the Senate and Congress and the Supreme Court, now all across the board, all of it, they're big on deregulation. And so that's a scary thing to consider. I think AI is obviously not going anywhere; I don't think it was going anywhere no matter who was in office. It's obviously a scarier concept now, I think, for a lot of us. I want to be clear: I don't think what Becky and I are saying is that AI shouldn't exist. Improvements and innovations in technology are necessary in the world that we live in. However, there does need to be regulation. There do need to be checks and balances. And we have to be super intentional about how we engage with these tools, right? Don't just sign up and use it because it's the cool thing to do. Really understand what you're getting yourself into when you start engaging with this. Understand how the technology works, so that if you have children, you can regulate what happens in your household. Because I'm pretty confident the government sure as hell is not going to be regulating what happens in the labs that are creating these tools.
Becky Mollenkamp: I'm interested to see what Europe does or doesn't do around AI, because they have been better around climate, around attention-economy issues, around privacy. A lot of the things this touches on, they have done better, not perfect, but better than America. I used to think we were slowly but surely following suit, but I don't know anymore. And I worry that Europe may not be either, because of what our political changes here mean for the global economy and the global political landscape. But I am interested to see if they approach things differently, with some more regulations that might give us an example of what it could look like. Because as much as I understand and have heard that it's not great for the climate, I haven't heard, and maybe you have, or it could be out there and I just haven't done enough reading, what climate scientists propose when it comes to AI. I should ask ChatGPT, just kidding. Because the idea of it going away, whether I think it should or shouldn't, doesn't even matter. It's not going to. You can't put the toothpaste back in the tube, right? This technology exists. It's exciting to people. It does solve some problems. It's not going to go away. There's never been a technology advancement, other than, like, you know, the Blu-ray disc, or what was it, the LaserDisc, that we regress on, right? Technology only continues to advance. So the idea that it's gonna go away is silly; we can't propose that. So then how do we live with it, right? How do we find a way forward that's more ethical and more mindful of this precious planet of ours? I would love to hear more from the folks thinking about those issues, what they're proposing, if they have thoughts yet. It's still so new, I don't know. But I need to do more reading and see if I can find that.
I don't know if that book that you mentioned talked about that or if it was more just laying out the problems.
Taina Brown: I'm actually not sure, because it came out not super long ago, but in technology terms it's ancient. I think it came out before ChatGPT was a thing, or before it was a thing the way it is now. Let's see. Algorithms of Oppression, what year was this, Safiya Noble? 2018. So that was six years ago.
Becky Mollenkamp: We'll link to it in the show notes too. So, ChatGPT, when you read this transcript, there you go. And even transcripts, by the way, are being created by AI. The technology inside the tool we use to edit this podcast is AI.
Taina Brown: Yeah, all the YouTube shorts and all of that.
Becky Mollenkamp: And to not do that would take what already takes me two hours and more than quadruple it. I mean, what do you do? I don't know. I don't know the answer. So this was not an episode of giving solutions. It's more an episode of the messy part, of just thinking about it. But I do like what you said about what we can do. We don't know the answers, we don't know any of it, but the thing we can always do with any of this stuff is try to be more mindful and intentional. Even if you don't know, and even if it's not the right answer or the perfect answer or an answer that's going to make the biggest difference, I think it's always important to at least be aware, right? Start bringing mindfulness and awareness to how you're interacting with everything in the world, everything you're using. Not just AI, but all the things. Be more curious: How is this problematic? In what ways might it be? In what ways am I using it that I maybe don't need to, that I could change? And then try to do that, be more intentional, and ask, what is my relationship with this tool, this system, this thing I can't escape? It's the same way we deal with capitalism. I can't escape capitalism, but can I be aware of where that capitalist conditioning is showing up? And then how can I be more intentional about making changes where it makes sense? I think the same thing applies here, until we can collectively work to install a government that actually gives two shits about the environment and mental health and all of the things.
Taina Brown: And I think I would also add just like, how can you be intentional with your usage of AI to help your community, right? To help like the people who need it the most. And so like these tools are available. So why not use them to, why not subvert them in that way, you know?
Becky Mollenkamp: We're all going to get through this, y'all, somehow. So if we sound like we're a bit nihilistic today or down in the dumps, there's good reason.
Taina Brown: It's November 6, 2024, that's why.
Becky Mollenkamp: You're probably feeling the same way if you're watching or listening to this anytime in the next four to eight years. Anyway, thank you for humoring us and listening to this, and sorry if it was too depressing. With all things, we just have to slowly figure it out. So thanks for chatting about it with me, Taina. And email messyliberation@gmail.com; we'd love to hear what you think of the episode, and whether you listened to the AI-generated episode spliced in earlier.
Taina Brown: And if you know anything about AI at all, please reach out to us and let us know how this thing works, what we can be doing, how we should be using it, what are some things we should be considering. We'd love to hear from you.
Becky Mollenkamp: Yeah, we'd love to have a conversation with you on the podcast about it, I think. So, all right. Thanks, everyone.
Taina Brown: Bye.