Computer Says Maybe

Earlier this year, Abeba Birhane was asked to give a keynote at the UN's AI for Good Summit — and at the eleventh hour they attempted to censor any mention of the genocide in Palestine, and of their Big Tech sponsors. She was invited to give her full, uncensored talk at MozFest.

Further reading & resources:
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

What is Computer Says Maybe?

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.

[00:00:00]

Abeba: The ITU, it's different, a really weird branch of the UN where they are completely captured by tech. So you go to the conference and you feel like you are in one of the, you know, big machine learning conferences. Like, and check out the new Cybertruck. Yeah, exactly.

Alix: All right. We were at MozFest last month. I had some amazing conversations, which we are releasing on our feed all this week. If you missed MozFest, we also made a special episode that hopefully will make it feel like you were actually there, and that's on the feed right now. So if you haven't heard it, go back and give it a listen.

This is Dr. Abeba Birhane, cognitive science heavyweight, talking about how the UN censored a keynote that they invited her to give. And she got a chance to give the full talk, uncensored, at [00:01:00] MozFest instead. So here she is to tell us about what happened.

Hi Abeba. Hi, Alix. So you gave a keynote yesterday, and I have it on good authority that it was very similar to the speech that you were attempting to give, um, in a UN forum recently. Do you wanna say a little bit about the subject matter of the talk, and then maybe we can talk about what it's like to be in a space where maybe it was received a bit differently?

Yeah, yeah,

Abeba: absolutely. So I was invited to give a keynote in Geneva at the UN summit, the AI for Good Summit. This is a really significant event; every year they gather thousands of people, some of them high-profile people, you know, uh, politicians, government bodies, regulators, and so on. But most importantly, you see predominantly AI [00:02:00] companies, tech companies.

There were demos and shows, including a Tesla car in the middle of the demos, and you see kind of lots of robotics work and various telecommunications and so on. So basically, this is the annual event where most people who claim to work on AI for social good gather. And I have kind of known the community because I used to be part of the UN AI Advisory Body.

I have interacted with, you know, the people from the ITU, and every year they would invite me, but for either, you know, a fireside chat or a panel or something, and I was never compelled to go. But this year they asked me to give a keynote speech on the main stage.

Alix: Knowing all of your views on exactly a lot of these topics? I mean, well, did they... had they done their research?

Abeba: Exactly. I kind of figured, well, they know me, and if they're offering this opportunity, maybe they are starting to think critically. Maybe they are starting to create the space for adversaries that actively challenge them, because I think that's one of the most important characteristics of self-improvement: actively inviting different views.

However, about two and a half hours before my talk, I was summoned to kind of go through my slides, and basically I was sitting down with three men, big men in black suits, flicking through my slides one by one and going, sorry, you can't say that, you can't present that, you have to remove that. So we entered the negotiation.

Alix: What were they drawn to in their corrective energy?

Abeba: They didn't want anything about Palestine, Gaza, Israel [00:04:00] mentioned. Uh, they didn't want logos of their sponsors, which is basically, you know, uh, big tech corporations. They wanted the logos of Microsoft, Google, Amazon, Palantir removed from the kind of genocidal association I was making on the slides.

So these were the main objections. They didn't want me to use the word genocide; they asked me to replace it with war. So not only did I have to remove anything that mentions Palestine, Gaza, and Israel, they also warned me not to say any of these words on stage. Usually I do not compromise this much, but I was not really expecting this level of censorship and scrutiny, and I was literally shaking.

So I compromised. In retrospect, I would have said, you know, fuck it, I'm not doing it.

Alix: But also, the nerves of being about to give a talk like that. Yeah. And then, because you're already in, like, a nervous system regulated state, and then someone [00:05:00] is like, I'm now gonna ask you to do something totally unacceptable. Um, I imagine, yeah. So that must have been so intense.

Abeba: It was either withdraw my talk or, you know, censor a lot of the content that they didn't want.

Alix: Were they UN officials?

Abeba: UN officials. Even one of the guys was the deputy secretary-general of the ITU.

Alix: So basically he's very high up. And the UN special rapporteur has made a determination of genocide, no?

Abeba: Yeah, yeah. I think the ITU is... it's different, a really weird branch of the UN where they are completely captured by tech. Okay, so you go to the conference and you feel like you are in one of the, you know, big machine learning conferences. Like, and check out the Cybertruck. Yeah, exactly. You feel like, is this NeurIPS or ICML, or, you know, actually an AI for social good event?

All their sponsors are big tech corporations. After the fact I did some analysis: about half, uh, of the main stage speakers come from big [00:06:00] tech, from industry. So I think the conference has lost its original purpose, or its vision, now. They really are kind of in the AI race, and they really are catering to their sponsors.

They are invested in scaling, making it bigger, making it more visible, garnering as much attention as possible. And no matter how valid or how good an AI tool is, they're really determined to ensure they are using AI to solve, and when I say solve, I'm saying it in quotation marks, any societal problem.

Yeah.

Alix: Whatever that means. So, okay, so you have some UN, I don't know, like bruiser types telling you to change all your slides. You're about to go on stage. There's, like, maybe over a thousand people in an audience that probably don't want to hear that big tech is complicit in genocide.

They know it, but they don't wanna hear it. They want [00:07:00] to hear some booster speech about how AI's gonna solve all of our problems and isn't that great? So you go on stage and you deliver the speech. What happens?

Abeba: I go on stage and I was wearing the keffiyeh. And I see people standing up and leaving. I was, like, sweaty.

Alix: I bet. My God.

Abeba: And yeah, I delivered the speech. It was a bit incoherent because I had to be careful around my words, which is impossible, but yes, I did it anyway, and I got a massive applause at the end. Because one of the slides, and this is where I kind of drew the boundary, I had a slide around BDS. I had the logos of all the companies involved.

Alix: You kept that?

Abeba: I kept that. So that was, you know... if they wouldn't let me keep that, then it's not worth giving the talk at all. Some people in the audience really appreciated that. So the talk ended, I went off, and some journalists picked up the fact that I was censored. They reached out to the ITU and so on.

They [00:08:00] confirmed it. Meredith Whittaker was speaking right after me, so she was there as well. So coincidentally, I also had a witness, so they couldn't deny it. A good witness to have, also. Exactly. Uh, so this was kind of picked up by journalists, and I wrote a blog about it. I was still shaken by the experience.

Alix: I'm sure.

Abeba: Then, uh, my blog post kind of got around, and Nabiha picked it up and said, I'm sorry your talk was censored, would you like to come to MozFest and give it, but with no censorship, the uncensored version? So yesterday I gave that talk, the uncensored version, in a much more relaxed, uh, supportive, friendly environment.

Yeah. Where I don't have to apologize, where I don't have to tiptoe around, you know, words such as genocide, or what's happening in Palestine or Gaza, or Israel's contracts with companies like Google and Microsoft.

Alix: How did it feel to do the full [00:09:00] speech?

Abeba: I felt like myself. It felt like, I know this talk, I wrote it, and I am in an environment where

people understand, even if they may not agree with me. So that's a really nice feeling. The talk, because it was written for the ITU community, for AI for social good, was really centered around, you know, the UN's Sustainable Development Goals, and how some of them have stagnated and some of them are actually regressing, doing worse.

So these are things like, you know, peace and justice, zero hunger, gender equality, and so on. In my talk, I kind of went through a long explanation as to how AI is actually exacerbating these issues rather than helping solve them. So I presented, you know, some empirical work that looks at how AI systems, particularly generative AI, tend to encode and exacerbate historical [00:10:00] norms, you know, existing societal attitudes and so on.

I walked the audience through, you know, the process of data collection, detoxifying datasets, curating, you know, benchmark datasets and so on, and how all of that process in and of itself is really structured in a way that disproportionately discriminates against, or disproportionately negatively impacts, you know, languages, cultures, and concepts that are used by communities at the margins of society.

If you look at, for example, one of the data detoxification methods: creating a word list. This is what Google has done with their C4 dataset, which is a massive dataset. They created a list of bad words that they can then filter on; I think they have over 400 words on the list for the C4 dataset, which was supposed to be, like, dirty, obscene, and bad words.

The idea is, like, to blanket ban, or to remove everything, based on those lists of keywords [00:11:00]. And what auditors did was look at the content that was excluded, that was deemed as toxic, and the core finding of the audit is basically that language that was used by, you know, African American English speakers, you know, websites from

rap lyrics sites and so on, and language that was used by the LGBTQI community to communicate about sexual health and so on, was disproportionately removed compared to, you know, other language that adheres to the status quo. So the idea here is that if you look at the AI system that's

kind of built and being deployed, the entire process from data curation, data cleaning, you know, data management, but also if you zoom out and look at the entire extractive process, where annotators and data labelers are paid very little but do emotionally taxing work, and you have the environmental [00:12:00] impact of, uh, you know, the massive energy and water use of these data centers...

When you look at the entire process, the entire stack, what you find is that the tech is built on extractive structures. These technologies, in and of themselves, are structured in a way that encodes and exacerbates existing societal injustices. Then attempting to convert these systems into something that can be used for AI for social good is, like, you know, it's

Alix: laughable.

Abeba: It's like building with rotting wood and attempting to build a palace. It just doesn't make sense.

Alix: The way that you bring empiricism to a vibes fight, um, I have always had great respect for. And I think... do you think this is gonna turn at some point, where people internalize it? It just feels like there's an emotional need among people that work on AI for social good, or the companies that support these kinds of things, [00:13:00] to construct a narrative around these technologies, that maybe they're the path through a set of really complex problems.

Do you think that, I don't know, that feeling, that assertion, that inflated conception of how these technologies might make the world better, do you think it's gonna change? Like, do you think there's gonna be, like, a dam that breaks and people say, actually, this is disgusting?

Abeba: I'm not sure. You can understand it.

People inherently want positive stories, you know, feel-good things. People want to use, at least they want to try their best to use, these technologies for good. But unfortunately, even with such noble intentions, that's misinformed at so many levels. This comes from, you know, a misunderstanding of how a lot of the social issues are

inherently complex, that they revolve around, you know, understanding culture, history, background, and most importantly, that it's not tech that [00:14:00] they require. It's, you know, the political will. It is tackling problems from the root, which tends to be structural. So it's changing structures. First and foremost, you need the political will, not the technology, if you really want to address these issues.

It also comes from misunderstanding how this tech is made, so to speak: breaking it down and, uh, looking at what are the components that make this AI, and what is, you know, the empirical work that is doing rigorous, systematic tests, and what is it saying? So, yeah, it comes from a misunderstanding of both the social issues and of what these technologies are and what they can do.

Alix: So, final question. If I am a recent graduate from university, and maybe I've done a social science degree, and I wanna volunteer, and I'm really excited about all these new technologies, and I think there must be a way that they could make the world [00:15:00] better, and I am interested in this as a possible project or endeavor, what would you say to someone who's in that position and they're

excited by these opportunities and they're sort of eager to, quote unquote, help?

Abeba: This is tough because it's so relatable. I get so many students, I'm sure, yeah, so excited. And so you have to be careful. I mean, I'm always happy punching up and kind of calling out any misunderstanding and injustices if someone is

a level higher than me, whether it's, you know, in academia, whether it's, you know, the head of school or a professor, or if it's, you know, a tech CEO. I have no problem pointing out, you know, you are wrong, you are talking shit. But when it comes to... I have a principle, I have a policy in my lab, you know: always punch up but never punch down, and always kind of support and provide

constructive guidance and mentorship to upcoming scholars. I don't wanna break their spirits, so this is really tricky. I [00:16:00] actually, yeah, I actually have, uh, people like that in my lab, and I would do my best to kind of gently nudge them and show them, or to gently nudge them so that they can see for themselves

how difficult, or sometimes how absurd, the kind of project they want to embark on is. And you just hope that, you know, they will realize it. You just hope that they will develop that critical thinking that will let them see, you know, things for what they are. And that's all you can do, especially if they are, you know, junior upcoming scholars.

Alix: Yeah, which takes time. And I think the advantage that a lot of these other spaces have is that very quickly they can get people excited with superficial ideas of, um, making the world better, which is this, like, endorphin hit of, I can be engaged and I can have agency in this very complex world with lots of problems. It's, it's human

Abeba: nature. We want the simplest explanation, and we want to feel good. We want to do good.

Alix: Well, that's all I've got for you, but thank you so much, both for your leadership in high-profile settings, [00:17:00] where saying the difficult thing is, I'm sure, very, very difficult to do.

I'm glad you had the chance to, like, I don't know... thank you... be your full self, um, and be in a space where people were supportive of what you had to say.

Abeba: Thank you. Thank you. I always appreciate talking to you. Thanks, Alix.

Alix: Yeah, thank you. As usual, thank you to Sarah Myles and Georgia Iacovou, and a special thanks to Mozilla for letting us take up space, uh, at their festival with a little recording studio and a little gazebo.

And thank you to the audio engineering team that helped staff it. It was a very lovely experience. Coming up on the feed tomorrow is Luisa Franco Machado.