Technology and Security

In this episode, Dr Miah Hammond-Errey is joined by Professor Emily M. Bender, a renowned AI commentator, professor of linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech Hype and Create the Future We Want. They explore the complex relationship between language, large language models, and the rise of “synthetic text-extruding machines.” Bender discusses the origins of the “stochastic parrots” metaphor, the risks of anthropomorphising generative AI, and what’s really at stake as automated systems permeate journalism, leadership, and collective decision making.
 
The conversation outlines some of the social and democratic impacts of synthetic content, including its effects on democratic discourse and journalism, the dangers of language standardisation, and how emerging tools can erode diversity and self-confidence in language users. Bender offers practical advice for policymakers and leaders, emphasising transparency, recourse, and data minimisation. She shares observations from her book tour, reflecting on the ongoing need for human connection in a digital era, and outlines the importance of workers’ collective rights in navigating the future of automation.

What is Technology and Security?

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com

Technology & Security Episode Transcript
Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Dr Miah Hammond-Errey.

Dr Miah Hammond-Errey: My guest today is Professor Emily Bender. She's a professor of linguistics at the University of Washington, where she directs its computational linguistics laboratory. She was included in the inaugural TIME 100 list of the most influential people in AI. Her latest book, co-authored with Alex Hanna, is called The AI Con: How to Fight Big Tech Hype and Create the Future We Want. Thank you so much for joining me in person in Sydney and on the podcast, Emily.

Dr Emily M Bender: I'm so thrilled to be here.

Dr Miah Hammond-Errey: She has extensive experience in natural language processing and computational linguistics, and regularly advises policymakers, from local officials to national governments to the United Nations.

Dr Miah Hammond-Errey: We're coming to you today from the lands of the Gadigal people. I pay my respects to elders past, present and emerging and acknowledge their continuing connection to land, sea and community.

Dr Miah Hammond-Errey: So, Emily, as a linguist, what does language mean to you?

Dr Emily M Bender: I mean, as a linguist, language is the thing that always draws my attention. Of course, it's lovely to be traveling and hearing different accents, but language is also, you know, a very large part of how we connect with each other. Um, in, you know, face-to-face settings, like we're getting to do today, but also at a distance of both time and space. And it is intensely human and intensely social and also very processable with computers, and that's turning out to be quite a weakness, actually.

Dr Miah Hammond-Errey: Why is language so important in the context of large language models, or as you've called them, synthetic text-extruding machines?

Dr Emily M Bender: Synthetic text-extruding machines is an attempt to make it sound a little unpleasant, because I want people to sort of approach large language models with caution. And the issue is that large language models aren't actually modeling language. They are modeling the distribution of word forms in text. They are modeling what language looks like rather than what it really is, what we use it to do, and what it means to us.

Dr Miah Hammond-Errey: You co-wrote an article, um, in 2021 that has a certain infamy in the research community. It's called On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? I'll put a link in the show notes. The whole paper is great, but I want to start this interview with your explanation of the meaning of the term stochastic parrots for a general audience.

Dr Emily M Bender: Yeah. Okay. So stochastic parrots was a phrase where we were attempting to make vivid what it is that a large language model can do. And keep in mind, this was published in 2021 and written in 2020. And so at that point, there was this race to making ever larger language models. But they were not, for the most part, being used to synthesize text. This was sort of a fringe use case, but there were already some people getting excited about that, and it was very concerning. And so the idea behind the phrase stochastic parrot was, first of all, no shade on the actual creatures, parrots, which are lovely. But we have this verb in English, to parrot, which means to repeat back without understanding. Stochastic is also doing a lot of work there. It means randomly, according to a probability distribution. So a stochastic parrot isn't just repeating back verbatim what's in its input training data, but rather a probabilistic remix of what's there.
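
As a minimal illustration of that idea (a sketch added for readers, not part of the conversation; the toy corpus and function name are invented), a tiny bigram model shows what a "probabilistic remix" looks like in practice: it counts which words follow which in its training text and then samples continuations stochastically rather than copying them back verbatim.

```python
import random
from collections import defaultdict

# Toy "training data": the text the model gets to remix.
corpus = "the parrot repeats the phrase and the parrot remixes the phrase".split()

# Record which words follow which (a bigram model of word-form distribution).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def stochastic_parrot(start, length=8):
    """Sample a continuation word by word from the observed distribution."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        # Stochastic: chosen according to observed frequencies, not copied verbatim.
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(stochastic_parrot("the"))
```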

Dr Miah Hammond-Errey: Can you explain why intent and connection have so much meaning in language and communication, and what that really means for how we think about and use LLMs?

Dr Emily M Bender: Yeah, so intent and connection is why we use language, right? Sometimes we say rhymes just to make noise, right? It's soothing to young children to hear rhythmic sounds. And sure, young children are busy learning language, so they're going to get something out of that, but also just the sound. And we will sometimes use letters as graphic arts elements without really caring what words are being spelled. This is especially true if we're using letters from other people's languages, right? Think about all of those tattoos that use Chinese characters that are just absurd, right? But usually when we are using language, it's because we want to share an idea or make a request or connect with somebody over some meaning. And so we do that by choosing particular words. And the words are what's observable and what could be computed very easily. But they are not actually the stuff of language. And that's where it becomes problematic, because we have these systems that can mimic our use of words very well. And when we encounter that mimicked use of words, we approach it as if it were language with communicative intent, and we interpret it that way. And there's a whole other story I could spin for you about how that involves imagining a mind behind the text that isn't there.

Dr Miah Hammond-Errey: Yeah. Could you take me there?

Dr Emily M Bender: Yeah, absolutely. So it is very easy to imagine that when we are interpreting someone else's language, something someone said, we are basically unpacking from the words, the meaning that that person packed into them. That's not actually what's happening. What's happening is we take everything we know about the person who wrote or uttered the words, everything we know about the common ground we share with that person, and everything we believe about what they must believe about their audience that might have been us or might have been someone else, depending on the situation. And given all of that, we then ask ourselves, what must they have been trying to convey by choosing those words and in that order? And what's really important there is that we're doing that reflexively and instinctively, and we can't help ourselves. And it crucially involves imagining someone else's mind. So when we see the output of a large language model being used to synthesize text, we're going to do that same thing. We're going to produce an entire imagined mind. And then we have to let go of it and say, that wasn't really there. That was all us projecting into the text.

Dr Miah Hammond-Errey: So much has been written about Eliza, Joseph Weizenbaum's 1960s chatbot, and the risk of anthropomorphism. What you're describing there is that, but more: it is this imagining of the other person's intent. And somehow, I mean, in the case of Eliza, he specifically told people not to do this, and yet it continued to happen. Why is that?

Dr Emily M Bender: Yeah, I think it's because it just taps into how we process language, and both with Eliza and with our modern chatbots, they are designed to be extra anthropomorphic. There is absolutely no reason for either Eliza or ChatGPT to use the pronouns I and me, because there's no I or me in there, right? But it's set up that way, and it's set up to be a turn-taking system. It's very much leaning in to the way we do language with other people and leaning into our desire to anthropomorphize. And, you know, I think Weizenbaum, who, first of all, like, quickly warned about the dangers of this, but also was sort of the first one doing it, could be forgiven. But the people who are building this now, and instead of creating technology that is transparent about its actual affordances, dressing it up to look as anthropomorphic as possible, I can't forgive that.

Dr Miah Hammond-Errey: Why do you think some people are so enamoured with the outputs of generative AI, and others are not?

Dr Emily M Bender: So this is really a question for a psychologist and not a linguist, but I'll give it a go anyway. I think that, uh, some people are really taken by the fantasy of artificial intelligence as it's been sold to us in fiction, and it's very easy to imagine that large language models being run as chatbots are close to that, because they can say surprising things, by remixing their input training data, that we then make sense of on just about any topic. So it feels like maybe you're talking to the Star Trek shipboard computer. That's one category. I think another category of people end up being susceptible to the way these things are designed to be sycophantic, right? They are designed, in this process called reinforcement learning from human feedback, to just agree with the input that comes in. And I think, to people feeling a certain way, that can be reassuring and pleasant.

Dr Miah Hammond-Errey: I'm going to go to a segment. What are some of the interdependencies and vulnerabilities of language and technology that you wish were better understood?

Dr Emily M Bender: So there's actually quite a few. I mean, there's, first of all, our own vulnerability to language as an interface. And I wish that people building language-generating technology were much clearer about the affordances of the technology, and also better educated on how people approach language. And unfortunately, though we have this field called computational linguistics, natural language processing, the vast majority of people who do computational processing of text don't have much training in linguistics at all. There are other vulnerabilities that have to do with the fact that our biases get expressed in language, right? Everything that we are, um, thinking about, both sort of the overt, like, known bigoted things, will be expressed in language, but also very subtle things. So Meg Mitchell makes the point that in English, I think maybe especially American English, the bigram, so two-word sequence, woman doctor or female doctor is super common, but people don't talk about man doctor or male doctor. And that makes it seem like it is unusual for doctors to be women, which is hardly the case, right? But that very subtle way of thinking about it shows up in the way we use language. And if you are building language processing systems by computing statistics over large collections of text, you are going to pick up all of the biases in that text. And if you're doing this, again, probabilistically, statistically, it's going to amplify those biases.
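
To make the bigram point concrete, here is a small illustrative sketch (not from the conversation; the mini-corpus is invented): simply counting two-word sequences in text reproduces the skew Bender describes, and any model built from those counts will absorb it.

```python
from collections import Counter

# Invented mini-corpus standing in for a large text collection.
sentences = [
    "the female doctor reviewed the chart",
    "a woman doctor joined the clinic",
    "the doctor reviewed the chart",
    "the doctor joined the clinic",
    "the doctor gave her diagnosis",
]

# Count two-word sequences (bigrams) across the corpus.
bigrams = Counter()
for sentence in sentences:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

# The "marked" forms are spelled out; the unmarked case is just "doctor",
# so statistics computed over this text encode the skew described above.
print("female/woman doctor:", bigrams[("female", "doctor")] + bigrams[("woman", "doctor")])
print("male/man doctor:    ", bigrams[("male", "doctor")] + bigrams[("man", "doctor")])
```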

Dr Miah Hammond-Errey: Are those biases replicated over different language sets or are there not enough examples?

Dr Emily M Bender: It would be really interesting to look at the impacts of the training data and the way these things are done with multilingual training sets. So I think it's relatively common to say we are going to build a large language model that's got everything we can get our hands on, and it'll be mostly English, mostly American English, but then there'll be, you know, a little bit of, um, let's say Swahili. And it is possible to actually share information across the datasets in different languages. So on the one hand, that helps. It gives you a more robust system for Swahili. On the other hand, you're going to bring in a very American viewpoint. And the example I have to hand is actually with image generation, not text generation. But I remember seeing someone sharing a thread of pictures of groups of people specified to be from different times and places and ethnicities, and they were all grinning for the camera the way American tourists do, right? Because these were synthetic images, and if you asked for portraits of a group of people, well, that bias was coming in.

Dr Miah Hammond-Errey: That's really interesting, particularly how smiling is not as welcome in some countries.

Dr Miah Hammond-Errey: Recent research from MIT showed that over 80% of people using an LLM to write something couldn't remember the content within minutes. What are the implications for communication, decision making and leadership of that kind of use of technology?

Dr Emily M Bender: Yeah, I mean, one of the things that I've frequently found myself saying is that if somebody is sending me some text that they couldn't be bothered to write, why should I bother to read it? If you find yourself in a situation where it feels like it is so unimportant that papier-mâché would do, then I think we need to back up and say, okay, what's wrong here? Why am I writing a report that is that unimportant? But in general, I'm very worried for the information ecosystem. And I usually think about it in terms of, like, public search engines, but also absolutely within government documents, right? I don't want papier-mâché floating around in there alongside actual data.

Dr Miah Hammond-Errey: Absolutely. Um, you've mentioned our information ecosystem a few times, and synthetic content is making the challenge of sharing facts in a society, of exchanging, you know, deliberations and reaching consensus in democratic discussions, much more significant, but also the threats from state actors. And recently we saw Russia infecting a chatbot with pro-Kremlin messaging. These are some of the emerging challenges. Where do you think that could go, and what do you think we can actually do about it?

Dr Emily M Bender: So one of my really big worries is that the more we have synthetic text sort of flowing through the information ecosystem, the harder it is to find trustworthy information sources, and also the harder it is to trust them once we've found them, right? And so I think it becomes very important to insist on authenticity, to insist on provenance. And I would love regulation that required watermarking of all synthetic media. And it doesn't mean that it's going to all be watermarked. We're never going to get there. But just like with actual physical oil spills in the physical ecosystem, less is better, right? Just because we had the Exxon Valdez spill in Alaska didn't mean that the BP Deepwater Horizon disaster in the Gulf of Mexico didn't matter. And so the more we can clean this out by requiring it to be watermarked, I think the better. And we're also going to have to come up with systems for, you know, vouching for authenticity. That's the other direction to go: especially for highly sensitive documents, to have a way of encoding this is real, these are the authors, here's who's accountable for it, and, you know, this is the sort of reference copy of the document.

Dr Miah Hammond-Errey: Yeah, I often wonder if it wouldn't be easier to watermark originally produced content. I can see so many uses for that, particularly in government. From my perspective, looking at that same information ecosystem or environment challenge: at the same time as we can't verify information and sources are much more difficult to verify, we have seen a dismantling of trust in the traditional gatekeepers of that information, the institutions that have gatekept it. So whether it's media, academic institutions or political leadership, we've seen a decline in trust in all of those. And that places us in a really difficult situation, because we are losing the gatekeepers who, you know, have had many issues but also did provide this sort of authentication and verification system. With AI and the information environment as it is, we don't actually have any replacement options yet.

Dr Emily M Bender: Yeah, it's really, really rough. And, you know, it hurts me every time I see journalistic outfits making deals with OpenAI or putting up these, like, chatbots to access their archives. It's like, why would you do that? Right. I've also spent a lot of time in the past few years speaking with journalists and really getting to see sort of what the work of journalism is and all of that verification. And yes, there's some gatekeeping. Yes, somebody is deciding what is newsworthy, but also there's all this work of fact checking and triangulating and...

Dr Miah Hammond-Errey: Yes, yes, a lot that actually goes.

Dr Emily M Bender: Behind. Yeah.

Dr Miah Hammond-Errey: Agreed. I think that's why, for me, the work on stochastic parrots was so interesting and helpful: it really highlighted that the way LLMs produce content is not actually cohesive or coherent. It is predicted, not, you know, predicted specifically, but broadly tokenised, and so you're just predicting what's coming next rather than producing something that actually has coherence. And so it's actually quite incredible that it makes so much sense.

Dr Emily M Bender: Um, right. But if it makes sense, it's because we're making sense of it. And we.

Dr Miah Hammond-Errey: Are.

Dr Emily M Bender: We're doing the hard work and we are incredibly good at that. Right? We are meaning makers. We're going to do it right. We will look at something that's fully random and tell ourselves there's a pattern there. And so if it is also speaking our language. It is really hard to like, take that step back and say, no, that's us. That's us doing the sense making and doing the sense making of something that has no accountability, that has no authenticity, and that actually does not deserve our attention.

Dr Miah Hammond-Errey: I've been warning and concerned about the rise of AI, um, voice search for this specific reason, because, I mean, it's already problematic in the written form. But when you get to that next level of interaction, that has a whole different feeling, and there's no ability to easily verify the source. I find that is just such a big step in the wrong direction, um, in terms of security, in terms of being able to identify and analyze and really interrogate the sources behind the, you know, the information.

Dr Emily M Bender: Yeah, absolutely. I have a couple of papers with Chirag Shah, who's at the Information School at the University of Washington. One's called Situating Search, and the other one's got a longer name that I can't do off the top of my head, but we're basically talking about how you actually don't want an answer machine, right? That's not the world we live in. It is not the case that any question you could think of has an answer that is knowable by a person or a machine. What we need is the ability to situate sources in their context, in the broader landscape of information, understand how they speak to each other, understand what's reliable for what kinds of sources. And the example I like to use is imagine you put in a medical query and, using an old-fashioned search engine, you got back a link to the local university medical center, you got back a link to a really famous medical center, you got a link to a famous quack doctor, and you got a link to a forum where people going through the same sort of experience talk to each other. And each one of those things you're going to evaluate slightly differently, and each time you see them, you're going to sort of add to your evaluation of them. Any given piece of information that was pulled from one of them without context would be useless. And on top of that, if you think about the forum, right, any particular piece of information where someone said, well, in my experience, my doctor said... right, huge grain of salt there. But the chance to actually connect with other people going through the same thing, that's priceless. And if, instead of all that, you had an answer machine, it's isolating and really not helpful.

Dr Miah Hammond-Errey: Do you think, then, that people are also seeking connection from this process?

Dr Emily M Bender: Yeah. From conversation and from chatbots, absolutely. I mean, we're starting to see lots and lots of that. We're seeing Mark Zuckerberg selling it, right? He's saying, oh, well, people have demand for more friends than they actually have, so we're going to fill that need with, in quotes, AI systems. And it's like, no. Um, Chris Gilliard had a wonderful way of saying it: that is basically a bid to make sure that everything we do, including socializing, is mediated by big tech.

Dr Miah Hammond-Errey: Let's go to another segment. It's called the Contest Spectrum. What's a new cooperation, competition or conflict you see coming in 2025?

Dr Emily M Bender: Uh, yeah. Um, so I generally don't do predictions. I do hopes and fears.

Dr Miah Hammond-Errey: Okay. Let's have hopes and fears for 2025.

Dr Emily M Bender: Um, so I fear that the US government is going to continue to just, you know, drop bombs around the world, and in particular, the president of the US, who does not have the authority to do that, is going to keep doing that. Definitely a kind of conflict that I'm afraid of. Um, and I hope that people can find common cause across borders and across sectors, and understand that the concentration of power in the hands of a few is hurting all of us.

Dr Miah Hammond-Errey: You are in the middle of a book tour. What are some of your reflections and insights so far on the book tour?

Dr Emily M Bender: So one of the most fun things about the in-person events on the book tour has been meeting people who listen to my podcast, like, they actually exist out in the world, which is really cool. That podcast is called Mystery AI Hype Theater 3000. Um, I think that another really positive thing has been to see how many people sort of had the sense that they didn't like this stuff being marketed as AI, and were really glad to see our book sort of laying out the arguments and giving them the language for, um, why they feel uneasy and what they can do about it.

Dr Miah Hammond-Errey: Well, let's go to what AI is and what it isn't. In the book, you highlight the diversity of products being sold as AI and how there are, in fact, more precise definitions. You focus instead on what is being automated by the product being sold as AI. Can you take us through these categories?

Dr Emily M Bender: So the term artificial intelligence is a marketing term. It's there to make you think that this thing is magic and that all of these different technologies are actually one thing that must be really amazing, because it can do all this different stuff. But in fact, we can break it down into multiple different things. There are different possible ways of doing this classification, but this one is one that seems to cover most of the technology. So the first batch is called decision-making systems, and that's basically setups where you're using computers to automate consequential decisions. And it might not be language based at all. So there's the famous, uh, COMPAS system for, uh, recidivism prediction in the US court system, and that just had this long questionnaire, so a bunch of data points; we're going to do some statistics over that and output a prediction. Famously biased, right? These days people are also using chatbots as decision systems. So here we're sort of looking at, well, what are you trying to do here? What are you automating?

Dr Emily M Bender: And another one is classification. So you might be saying, okay, I've got this pile of, um, photos on my computer, where are all the ones with grandma in them? So we're going to classify them based on who's in the picture. Um, or you might be doing this for surveillance purposes, right? So, um, let's classify, um, I don't know, cars by how fast they're going, for example. Um, also advertising, right, classify web users for targeted advertising. Um, another type is recommendation systems, so what's showing up in your TikTok feed and your Netflix queue. Um, a fourth type is the transcription and translation ones, and some of these are incredibly useful. So we're going to take a spoken signal and create a written form of that, sometimes called automatic speech recognition or speech to text. Quite useful. Um, might do machine translation from one language to another, or also image style transfer, to take that selfie and make it look, for example, like an anime character. And this is before OpenAI basically stole the IP from Studio Ghibli. So there's that. And then the last category is what's sometimes called generative AI. This is the synthetic media generation, text or image.

Dr Miah Hammond-Errey: I think as I was reading it, I felt like these were such a great place to start because they focus on why you're using the technology in the first place. I want to ask you about the space between things. So when we talk about AI, or the use of AI to automate things, and, you know, kind of going along with the concern of job losses, I feel like so often it's the liminal and critical space between the tasks that gets lost. I think we're more than a compilation of tasks in our value creation and employment and in human flourishing. It's not really possible to reduce them to a list without losing something quite meaningful. I felt like that was such a great place to start, because it highlighted that there are so many pieces between those task lists. How do you see and think about this space?

Dr Emily M Bender: Yeah, absolutely. I mean, you can look at the work that a person is doing and, like, itemize tasks, for sure. And then sometimes those tasks can be automated, right? Um, often with a loss of connection or community. So if you think about self-checkout systems at the grocery store, um, you haven't actually automated anything. You're just having the customers do the task rather than an employee, and you don't have this point of connection anymore.

Dr Miah Hammond-Errey: You cover a lot of ground in the book, and it is, in lots of ways, an introduction to the topic for people who might have felt that it was a bit out of reach or confusing.

Dr Emily M Bender: So I think one key message is you don't have to be the kind of person who builds this technology or uses this technology to have an informed opinion about it. In many of these cases, when we're talking about automating something, it's the people who are going to have decisions made about them, the people whose job is being automated, um, or the people who are otherwise having the technology used on them that actually have the most informed opinion about how that should go and if it should go at all. But there is this really common trope from the tech bros, which is, oh, you don't understand this, you know, you're not qualified to talk about this. And it's really frustrating when the policymakers absorb that message and turn only to the technology companies for information and input into decision making about this.

Dr Miah Hammond-Errey: Which has definitely been happening more and more. And obviously we've seen, you know, the retreat of the US administration from many of the multilateral forums, which then makes it globally quite challenging. I might actually jump to a segment.

Dr Miah Hammond-Errey: What alliances or collaborative relationships do you think will be most important, particularly in the context of human technology interactions in the next few years?

Dr Emily M Bender: So I think alliances among people who are being impacted by the technology and between people who are developing technology and those impacted by it are going to be super important. And when we talk about, um, you know, so-called responsible AI or this, like, you know, ethics and technology, a lot of it comes down to connecting with the people who are being impacted, either because they're the ones using the technology or it's being used on them in some way, and figuring out what they need in this situation.

Dr Miah Hammond-Errey: I also see in the book you talked about strategic resistance, and I want to ask how much of an effective strategy that really is. You know, most of us use AI embedded in our phones and devices every day. Sometimes we have no choice; sometimes we have the opportunity to opt in. And many, including some of my guests, have suggested that by trying and playing with technologies and intimately understanding their weaknesses, we make better policy. If we have policymakers who are so distant from the technologies and don't actually deeply understand their flaws and potential, we end up in a vicious cycle of relying on the loudest, most well-paid voices.

Dr Emily M Bender: Yeah. Yeah. Um, so I feel there's a lot in that question, right? Do you have to play with the technology to understand it? Um, I think it's important to note that you can't see all of the flaws just by trying something out, right? Especially if you are using something like a chatbot for an information access system. The minute you ask it something you don't know the answer to, it's going to look convincing. It's going to look helpful, right? So I think that it is...

Dr Miah Hammond-Errey: Having the expertise to interrogate the answer is really critical.

Dr Emily M Bender: Yes. Um, but also you say, you know, we use AI all the time. And again, we have to disaggregate that term, right? So, you know, if you are using the speech-to-text system in your device, if you're using automatic transcription, that's not AI, right? And that's a very different thing than, um, you know, going to ChatGPT with some information you need rather than using a traditional search engine or going and talking to a librarian, right? There's that human connection again. And so I think it is worth being thoughtful about in our own usage. When do I take advantage of automation and why? What's being automated? Whose labor was exploited to create that automation? And unfortunately, there are systems that used to be done in, I think, more ethical ways. So automatic transcription, um, doesn't require a GPT-3 scale large language model, right? The transformer architecture, that's the technical term for the T in GPT, um, is a useful technological innovation for various applications of language modeling, including automatic transcription. But we have overshot the size of the data set that would be useful for that. And the way things are being done now, beyond the transformer innovation, leads to these weird systems where if you put a pause in your speaking, maybe this happens a lot to people with a stutter, for example.

Dr Emily M Bender: The language model part of the system will just keep outputting text that's got no correspondence to the input silence, right? So there are better ways of doing automatic transcription. It's something that we've had for a while now, and I don't have any particular objection to it. Again, it needs to be used carefully, right? If you don't have access to the original signal... I was talking to a journalist, it was over, I think, Google Meet, and before the journalist joined the call, there was this, like, Google Assistant thing that joined the call. It was something that was doing an automatic summary of the meeting. And I'm like, no way do I want my words that you might be quoting pushed through this papier-mâché machine, right? Please record it. You can use automatic transcription and then go back and check.
Dr Emily M Bender: And what are you missing out on by not taking notes yourself, right? If you haven't done the sort of cognitive processing of saying this part matters to me, but instead offloaded that to something that's going to get things wrong depending on the accent, it's not going to have your sense of what's important in the meeting. This comes back to the thing about, like, people writing stuff and then not knowing what it said because they didn't write it.

Dr Miah Hammond-Errey: It's also just foundational that if you want, you know, as a journalist or as a public figure, if you want to record something someone said to you, you record it and transcribe it verbatim. I'm really interested in this cognitive atrophy and the change in cognitive function as we interact more and more with different kinds of, um, you know, AI technologies. How do you see that? And are there, you know, are we at risk of homogenizing our language?

Dr Emily M Bender: Yeah. So the cognitive atrophy question, again, a question for a cognitive psychologist, so I don't have the expertise there. But I think there is something definitely going on with homogenizing language use, and especially when people are turning to something like ChatGPT because they are insecure about their own language use. Think about people working in a second language, or someone who's, you know, an L1 English speaker, sorry, technical term, someone whose first language is English, but they speak a stigmatised variety and they don't feel like they, you know, write or speak the way that they're expected to. And so everyone's turning to this one system that's got kind of this bland, like, you know, what-you-expect-to-find-on-LinkedIn kind of way of writing. And that's a loss, right? It's a loss to the community that has to read the stuff. And also, it's a really serious loss to the people who are continually then reinforcing for themselves: my way of speaking isn't good enough, I have to do this other thing.

Dr Miah Hammond-Errey: What are your key messages for policymakers in Australia who are also facing the reality of regulating emerging technologies?

Dr Emily M Bender: Yeah, so I don't know too much about the Australian context, but the first thing is look to existing regulation and keep applying it, right? Just because there's some automation being used, that doesn't mean that the existing regulations don't apply, and the US Federal Trade Commission under Lina Khan was doing a great job of that. Um, so that is definitely a possibility. Um, I think a second thing to keep in mind is that, um, the tech companies like to say this is moving too fast, you can't possibly keep up. To which I say, the goal of regulation is to protect rights, and rights aren't changing that fast. And so it's not a question of regulating the internals of the systems, but actually looking at the larger social systems that they're landing in and seeing sort of where we need to shore up protections. And then I want to sort of add a couple of points. One about transparency, right? We could regulate, um, the need for disclosure when automation is being used in some way, and the need for recourse. If you automate something, you have the possibility of scaling harm when it's making bad decisions. So there has to be, um, abundant recourse for someone who has had a bad decision made about them, so they're not standing in line with 500 other people with that same problem, right? So transparency and disclosure. Recourse. Accountability. I think it's important to be clear about who holds accountability. Um, and there was one more that was coming.

Dr Emily M Bender: Oh, yes. Data minimization. Um, so, uh, it turns out that, just like radioactive material, if you accumulate data in one place, it starts to become toxic and dangerous. And so if we can go from data being the new oil to, um, let's actually not let the data collect, I think it would be a better world.

Dr Miah Hammond-Errey: I'm going to go to a segment called Emerging Tech for Emerging Leaders. What do you see as the biggest challenges for leaders in this current technology environment?

Dr Emily M Bender: I would say that leaders really need to be concerned with that information ecosystem. What's happening with the synthetic media machines is that we have just this enormous surplus of useless synthetic media, but it looks very useful and therefore gets passed around as if it were actually something valuable. I think another thing that leadership should keep its eyes on is, um, career development pathways. So the more you can automate the kinds of tasks that tend to go with entry-level positions, the more we lose the bottom rungs of the ladder and we lose the chance to sort of do career development, to have the next generation of leaders.

Dr Miah Hammond-Errey: I think that's something that is under discussed, that when you take out these more junior tasks, you're actually taking away roles from people who were your future leaders. And how do you continue to recruit and retain those people and then, you know, create a pipeline of incredible people, hopefully working in your organization?

Dr Emily M Bender: Exactly.

Dr Miah Hammond-Errey: Coming up is Eyes and Ears. What have you been reading, listening to, or watching lately that might be of interest to my audience?

Dr Emily M Bender: There are a couple of wonderful books I'm happy to recommend. One is a couple of years old, but it's called Unmasking AI by Dr Joy Buolamwini, a beautifully told sort of memoir, but also a technical book about her experience starting off as an AI researcher and then becoming really an activist against the use of facial recognition technologies. Um, Karen Hao's recent Empire of AI is also wonderful. And then, um, Adam Becker's More Everything Forever is a fantastic exploration of the ideology behind a lot of these companies.

Dr Miah Hammond-Errey: I look forward to reading it. Um, let's go to another segment. It's called Disconnect. How do you wind down and unplug?

Dr Emily M Bender: Um, so I'm a runner, I go for runs, and one of my favorite things is to actually go for runs while I'm traveling and then, like, stop and take pictures along the way, and I call those picto runs. So that is really fun.

Dr Miah Hammond-Errey: What do you see as the security concerns of LLMs in a security context and national security context?

Dr Emily M Bender: Yeah. Um, so one of the things that LLMs open up is basically a massive, like, denial-of-service attack against information systems. Um, so you can think about this in terms of, like, Wikipedia. It becomes very easy to write, in quotes, to extrude, Wikipedia-style articles, right? And to do edits that the actual Wikipedia editors couldn't keep up with, for example. And so you could imagine, and again, security is not my area, but I could imagine someone feeding in synthetic text that looks like things that informants might be saying and really muddying the information landscape. Another example comes from a paper by McGuffie and Newhouse, which I think came out in 2020 because we cited it in Stochastic Parrots, where they were talking about using GPT-3 to populate message boards where extremists get recruited. And that was sort of like a good use case, good in quotes, right, for synthetic text, because nobody cared if it was accurate. It was just about, um, can we make it feel like there's a lot of people in here having a conversation with all of the appropriate sort of buzzwords and keywords. So that was really frustrating. They found out that, yes, like, it could be used that way. Um, and then similarly, you can end up with disinformation campaigns that effectively hide the real information in many, many, many slightly divergent copies. And so you've got, like, a real news article about, let's say, an infectious disease, and then you've got ten different versions that are slightly different. And it doesn't matter that none of them are particularly convincing, because it makes it impossible to find the one that's right.

Dr Miah Hammond-Errey: Thank you. I am going to go to my final segment, called Need to Know, but before I do, I just want to highlight something that you have said, and that is that rights don't change that much. And I think we so often forget, in the pace of technology and innovation, which can also be super exciting and fun, that rights and human needs don't actually change at that pace, or in ways that require us to think about them differently. Yes, they do evolve over time, but by and large, I think that's a really profound statement that we need to hold up to legislators and say: the technology may be changing, but our fundamental positions on human rights and on democratic values and principles are not changing, and these technologies need to be consistent with that, particularly in Australia, in the context of our democratic values and principles which we have set out, many of which current tech platforms are inconsistent with.

Dr Emily M Bender: Yeah. And you can put the burden of proof on the people producing the technology.

Dr Miah Hammond-Errey: Right. So is there anything I didn't ask you that would have been great to cover?

Dr Emily M Bender: I just have one more answer to what policymakers could be doing. Another area which feels very different, but is actually quite related, is protecting workers' rights to organize, because workers are often the ones who understand what's going on on the ground. And a lot of the most effective pushback against automation comes from workers who have been able to organize for collective bargaining.

Dr Miah Hammond-Errey: How do you lean in to the value of human-to-human connection in such a digital era?

Dr Emily M Bender: So this is a good question. I think part of it, for me, is I do my very best not to consume synthetic media. I spend a lot of time online because I do a lot of connecting with people through digital mediums. And in fact, my co-author, Alex Hanna, and I started working together online in 2020. We were part of a larger group that wrote three academic papers. We've written several op-eds together. At the point of the story I'm going to tell you, we had already done 50-plus episodes of our podcast and written a whole book, and then we finally met in person. So computer-mediated communication is fine, so long as you are connecting with people on the other end of it.

Dr Miah Hammond-Errey: Um, Emily, thank you so much for joining me today. It's been a real pleasure.

Dr Emily M Bender: Likewise. And thank you for putting up with a very sleepy guest.