Computer Says Maybe

It’s our second week of playing AI lingo bingo. The summit in India is underway and the air is thick with vague terms that fail to describe the big problems.

More like this: Lingo Bingo at the India AI Summit w/ Karen Hao, Joan Kinyua, Chenai Chair, and Rafael Grohmann

With us this week to discuss co-opted terms are Meredith Whittaker, on how ‘open source’ cannot meaningfully be applied to AI systems; Audrey Tang on ‘democratisation’, something which is both helped and harmed by AI; Abeba Birhane on everyone’s favourite slogan ‘AI for Good’; and Usha Ramanathan, discussing ‘AI and development’ in the context of the Aadhaar project in India.

Further reading & resources:
**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Pre Production by Georgia Iacovou | Post Production by Sarah Myles

What is Computer Says Maybe?

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.

Alix: [00:00:00] Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn. And in this episode we are going to go through part two of our three-part series on AI Lingo Bingo, which we prepared in advance of the India AI Summit. Uh, and the idea behind it was, there's all these terms and concepts that are actually deeply important for, uh, engaging meaningfully on the politics of AI.

But in spaces like summits, these words get twisted and flattened and sort of contorted into something that's almost unrecognizable. Um, so we decided to sit down with 12 of our favorite people, who are really expert and understand the importance of those concepts, to kind of give us a bit of pre-propaganda, um, before the summit turns these terms into mush.

And this is the second audio episode where we've cut together those interviews for your ears. Um, for those of you who were in Delhi, I presume you're on your way home, so hopefully this is very [00:01:00] well timed and some of the stuff that folks cover resonates. If you haven't heard the first episode, do check that out.

I think it's pretty evergreen. So if you are, in the future, also trying to help someone who is wrapping their head around what AI politics means and what the important concepts are that we should be thinking about, hopefully it's a good one that we can share for many months to come. I wouldn't say years.

Uh, 'cause who knows? In this episode, we're covering four topics, which I'll run through before we jump into our first one. We've got open source with Meredith Whittaker, and open source is a technical term that used to mean something; we get into why it's being used in the way it is being used now.

AI for development with Usha Ramanathan, who's kind of a legal legend who has litigated and provided legal analysis for years on Aadhaar, the digital ID system in India, to make sure that it's protecting the rights of Indians. And if you wanna learn more about that, we have a Vapor State episode where she digs in much deeper.

But in this one, she's gonna be [00:02:00] talking about AI for development as a frame. We've got democratization with Audrey Tang, which is obviously the busiest of buzzwords. And finally, AI for Good, with great friend of the pod and principal investigator at the AI Accountability Lab at Trinity College Dublin, Dr. Abeba Birhane. She's gonna kick us off by explaining why AI for Good is such a bad frame.

Abeba: I think there are multiple layers to it. At a higher level, oftentimes the idea is to extend AI tools to solve complex socioeconomic and political questions. But fundamentally, these are not questions that can be solved by AI or any other technology, because at the end of the day, these are, like, issues around the UN's SDGs.

These are the Sustainable Development Goals, things like no [00:03:00] hunger, or removing gender violence, or access to education, and so on. In the case of, you know, no hunger, for example, this is SDG number two: you simply have to provide food, or you simply have to create the environment for people to grow their food and to sustain their livelihoods.

And yeah, with no gender violence and gender equality and so on, these are inherently issues that require political will, issues that require, you know, restructuring existing systems, issues that require, again, you know, political negotiation and so on. So AI tools, or any other technological tool, simply just do not solve these problems, do not address these problems.

But when you come down another level, also, you look at current AI systems, from large language models to various simple tools that are used [00:04:00] in hiring or even in, uh, government, in public service operations. And what you find is that these tools, you know, from the data that's used to train them, to how they are benchmarked,

to how they currently work: you find that these AI systems are inherently also built with datasets and with ideologies that actually exacerbate inequality, that heighten discrimination, that encode and exacerbate, you know, societal norms and stereotypes and so on. And what you find is that, even if you have good intentions, even if you have problems that can be solved by AI tools, currently the AI systems we have are geared towards maximizing inequality.

They are inherently tools that kind of encode and exacerbate [00:05:00] all of society's issues. So trying to kind of use those tools to solve whatever social good problems we have is simply, I don't know how to put it.

Alix: I love your expression of, it's like trying to build a palace with rotting wood. Yeah, exactly. I think that's a really lovely metaphor for describing how much of a fool's errand it is to try and build on top of these systems to solve problems that are in fact embedded within the very systems that you would be using as kind of naive solutions.

Abeba: Exactly. So for these reasons, and at even a much higher level, if you look at major companies and corporations like Microsoft and Google, who have been creating, you know, their AI for Good, their AI for social good initiatives on the one hand; on the other hand, you know, they are the very entities that are exacerbating inequality. They are the very entities that are [00:06:00] powering genocide, powering war. They are the very corporations that are exacerbating environmental destruction. So, you know, the entire effort, it becomes a bit of an oxymoron, to use your term.

I think governments, whether they are from the developing world or Western governments, tend to see AI as an economic opportunity that they should be on board with. And most of them also see AI in, in light of national security, in light of, you know, national defense and so on.

Unfortunately, at these summits, there is very little discussion around, you know, the positive claims, the outcomes we are aiming for or hoping that AI will bring. Is there any empirical evidence? To what extent is the empirical evidence sound? And a lot of the claims are made [00:07:00] by either, you know, big tech corporations or AI vendors, who have a vested interest in ensuring that there is massive AI uptake and that, you know, these products are integrated across society. And there tends to be very little of the "let's see the evidence" approach, you know: where does this claim come from?

Alix: We'll come back to Abeba later in the episode. But for now, let's move over to my conversation with Usha Ramanathan, who's a human rights activist working across law and poverty in India. And she spoke to me about AI and development in the context of the Aadhaar project in India.

It kind of does everything that Abeba just described. Um, it was embraced by the government with very little scrutiny, and then it was deployed to the population with very little care. Here's Usha now, on how governments will still stay uncritically positive about these systems, even when the harms are staring them in the face.

Usha: This is the dream that's been sold to the Indian state, [00:08:00] and the Indian state today wants to be leading in something, and it seems like they can't do it except in the digital space. Which is also why talking about, you know, providing platforms, or actually resolving the issues, is proving a problem: because it'll strip it of, uh, the kind of reputation that they're trying to build for it, as being this great business which is going to change and revolutionize the world.

You can't afford to talk about people getting excluded. You can't afford to talk about who these people are who are getting excluded, the most marginalized. You can't talk about data and how data is being used to track people. It's called convergence here. So you, you don't have to have the UID, the Aadhaar database itself, hold all manner of information about people.

This is the ubiquity: you put the number everywhere and you can garner all that information. So it's giving the state an ambition, you know, of being in control of its people, but also, internationally, globally, to [00:09:00] be a leader of these kinds of digital technologies. And you can only push this narrative if you are completely clear that you don't want to acknowledge the problems in it.

So the very interesting thing for me is, you're having all these AI summits. I would've thought, I mean, when we discussed issues of this, of whatever nature we were working on, when we had conferences and meetings, it was, you know, to have it out amongst ourselves, to see what is it, and what is the position we are in, what should be done next.

But it seems now like it's only marketing all the way. You have to market that this is great, that's great. And it's happening, like I said, in the middle of people saying, oh my God, AI, it could lead to all kinds of harms and all kinds of possibilities of human extinction. So to me it seems like there is a severe need for psychiatrists to enter this space and do something with people who are so completely disconnected, even from their own thoughts.

When the, uh, UID project, the Aadhaar project, was originally spoken about, in fact it was hardly spoken about. It was almost like, [00:10:00] I mean, there were, there were no big news reports on it. There was hardly anything.

How we got to know about it was because they held meetings with what they call civil society actors, to introduce us to the concept, and, I think, basically to co-opt us into making the project work, because we have access to certain kinds of people to whom they don't necessarily have any immediate access. For instance, migrant workers, people working with migrant workers. They said, if you can come in, if you can, you know, onboard onto this project, it will help.

So we didn't go into it with skepticism, because we thought that maybe this will be another such project that will help the poor. The first meeting that we had told us that the ambitions were very different. Or, you know, what the ambitions were, we didn't know, but this much was clear: that the people who were working on the project really didn't know the poor at all, and it seemed like they were shooting from the shoulders of the poor.

We asked them questions, and the questions only received irritation in answer. And you know, you, you have to investigate these things, and the project started without a white [00:11:00] paper, blue paper, green paper, whatever color paper you want. So nobody knew where this project was going, you know, what this was supposed to do, what it entitled people to, what might be the risks involved in it.

But very soon, as the documents started emerging, for instance, there was a biometric standards committee, which produced a report, which was to say how they're going to adopt biometrics in the project. We realized that this is a project which is being rolled out on a national scale, but without really knowing either the science or the sociology of it.

And that made us ask many questions, and I think the reason that the skepticism grew is because we haven't received those answers, still, today. And the problems that were hypotheses in 2009, for instance: that old people, you know, working classes who are doing manual work, people who work with chemicals, women, people who don't have other identities which can help them identify themselves onto the system.

Uh, all the problems that we had anticipated, and the [00:12:00] question, of course, of convergence of data. The question of what will be done with the information that is collected about us; if private entities are going to have it, what is it that they'll be using it for? Is it all right for personal data to become widespread like this and be handed over to different people?

These were all hypotheses, you know; they were concerns at that time. But over a period of time we have seen that, in fact, every one of the things that we thought would be a problem is indeed a problem.

Alix: We'll bring Usha back right at the end to share some final truths on what AI really means if you aren't a tech CEO.

Let's go to Audrey Tang, Taiwan's inaugural digital minister, who has firsthand experience with rolling out digital initiatives on a national scale, and the term we dig into is democratization.

Audrey: I'm curious about the opportunity for the democratic system itself to become much more attuned to the uncommon ground, the [00:13:00] surprising common ground. The uncommon ground is something that people recognize as newsworthy.

Indeed, it is a headline if the adversaries, in either international politics or in domestic politics, do agree to, uh, do something together. This is usually headline news, so people do recognize that these things are valuable to have. The problem was with the previously reliable way to get people to agree on the uncommon ground.

So for example, previously, we know citizens' assemblies work. If you randomly select 400 people out of a population of millions, and put them in tables of 10, and have real conversations listening to one another, and then weave together their best arguments, and then vote at the end of it: actually, most people see that despite their partisan differences and ideological differences, they do agree on most of the things, most of the time, with their neighbors, because they had time.

To [00:14:00] nuance through their different positions and lived experiences. Recently, Google Jigsaw worked with the NAP Institute, under We the People, to do experiments, and it turns out, uh, with 2,000 people, statistically a microcosm of the US population, five from each congressional district, 97% of them agree that good values should start at home.

Parents have the most important job: teaching trust, respect, and kindness. And therefore, 96% of people say people should be judged by their character and actions, not by appearance and background. And then it leads to how a strong society should be built. So, almost, if you read the consensus that is reached at a table, you would naturally think, okay, the US is not polarized at all.

People have a very clear vision, sounds great, of how to construct a nation together, and how to get there, right? There's no doubt that this effect exists, lingers, in the people who are part of that conversation. And Fishkin's lab tracked, for example, that one year, uh, after such [00:15:00] an America in One Room conversation, people still think back and stay depolarized.

However, previously there was no easy way for people to enjoy this kind of conversation at scale. Mm-hmm. At every community, at every school, and so on. And also, even if they do enjoy such a conversation, for people who were not part of this conversation, it's easy then to paint, uh, the 400 people or the 2,000 people there as somewhat of an outlier, because in their mental model, it's us versus them.

Okay, so some of us managed to, you know, defect or something, but that's not me, right? So, so the, the repolarization effect is very strong. But now, I think for the first time, we have good enough, uh, social translation.

So as long as we know a little bit of where you're coming from, your personal experiences, then language models can take this entire conversation transcript of how people reached this uncommon ground and have a tailor-made conversation with you, in a one-to-one [00:16:00] fashion or one-to-few fashion, and really change people's minds toward enjoying the uncommon ground. In AI parlance, this is called super persuasion. Uh, and super persuasion is upon us.

So not only do AI models, uh, pass the Turing test, that is to say people can no longer tell whether it's human or robot, but rather, um, they are changing people's political beliefs, for the better in this case, or, uh, overcoming conspiracy theories. Now we do have open source tools, like the Habermas Machine, that are super persuasive.

So I think, uh, it is now upon us all, the democratic citizens: how are we going to use this super persuasive technology? Because if we do not use it to build the uncommon ground, I'm 100% sure the organized fraud, the polarizers, the propagandists and so on, are going to use those super persuasion tools, whether we collectively say stop or not.

So we really, again, have no choice but to use this to weave the social [00:17:00] fabric back. The key question for any government is: are you putting your citizens into the loop of colonial AI, or are you taking it back and putting those AI systems into the loop of your citizenry? Treating AI as an infrastructure for human coordination is more fast, fair, and fun than the colonial alternative.

So this is the first frame that I'd like people to, to think about. And in Taiwan we do not talk about, like, future extinction risk or something. Uh, we talk about the organized fraud that people are experiencing, like, right now, here. And instead of saying, oh, let's make a kind of universal top-down rule to enumerate the AI risks or whatever, we say, oh, let's use AI systems to help people cohere.

And agree, in the here and now, very quickly, against the fake synthetic intimacy fraud and [00:18:00] many other current-day issues. And so after we came together last March in such an online alignment assembly, the legal drafts were published in May, and everything was passed in July. And so throughout this year, for the whole year in Taiwan, there are just no deepfake ads anymore on social media, because the people have drawn the red line together.

Arguably a large fire, actually. But, uh, it's not, uh, the super fire, um, that consumed the entire civilization. But when the large fire happened, we did not just say, you know, that's the price of progress. We didn't say that. Rather we said, oh, let's light some campfires, also using AI; it's also fire. But those campfires bring people together.

They agree to draw the red lines for the firewall against the synthetic fraud fire of those social media companies. And now we coexist, uh, just fine with, um, synthetic media in advertisements, because these are very useful. People are going to use that for short clips and things like that, [00:19:00] but it does not lead to organized fraud in Taiwan.

Alix: So do you think the summit's framing around equitable distribution of compute and access to technologies is, um, I dunno. Do you think it makes sense as a priority or a policy focus?

Audrey: Well, I think it's a necessary condition, right? Taiwan, after all, is the home of the original personal computing revolution.

Personal computing, of course, relies on a computer on every desktop, to paraphrase Bill Gates, uh, or to quote him. And treating the necessary condition as the sole focus is quite dangerous. Mm. Right. Because if you only have the local compute, if you only have the local access to compute, then it is still a form of digital colonialism.

So a country, okay, has powerful server chips, uh, made in Taiwan. But the, the model, I mean both the AI model but also the pipeline, the governance model that produce [00:20:00] those AI models, are still controlled, uh, with the values of Silicon Valley or of Beijing. Then you have not democratized power. Mm-hmm. You, you have just distributed the terminals and the data extraction facilities of a centralized authority.

Actually, maybe even worse, because without decentralized compute, there's no way to collect that much realtime data. And so if you just distribute compute and you give up the local alignment of such compute, then you give up the sovereignty of alignment. So it may look like you have data sovereignty or compute sovereignty, but the sovereignty of alignment, uh, is, is gone.

And so I think, again, instead of putting your citizens, humans, in the loop of AI, we need to take those out and put AI into the human loop, into the loop of humanity.

Alix: Now let's jump in with Meredith Whittaker, who you likely know as the president of Signal, which is an app I hope is on your phone. But she's also a huge open source advocate.

Um, and democratization and openness are kind of inherently [00:21:00] linked. As Audrey said here, having distributed compute does not mean you're democratizing anything. And as Meredith will explain, openness in AI doesn't really level the playing field, like, at all.

Meredith: Are open source AI models useful? They can be useful.

Are they a challenge to the concentrated power of these companies? No. And even if you created an open source AI model completely outside that, on a supercomputer, with data you got somewhere: ultimately, all of that still wouldn't be a challenge, 'cause it doesn't challenge the actual, sort of, resources that are currently concentrated, that in combination are necessary. That include distribution networks, that include economies of scale, that include entrenched reach, that include the ability to define the tooling and the standards, and yada, yada, yada.

I don't think the large-scale, bigger-is-better

deep learning approach to AI is realistically separable from those big players. The entanglement [00:22:00] between the state and the AI industry, both of which recognize these technologies as extraordinarily important tools of geopolitical dominance. So all of that tells me that, you know, we're not gonna have one weird technical trick that just sort of unravels that. We're talking about political and political-economic forces.

Such that, even if we had technologies that were smaller, more nimble, and better, there would still be a deep vested interest in maintaining the bigger-is-better narrative, because it is extraordinarily advantageous as a moat, a narrative moat, around these significant political and economic advantages.

However, I am not arguing, and I wanna make this totally clear, that the models and the documentation surrounding particularly the most open forms of AI are [00:23:00] bad in and of themselves. What I'm saying is, they don't do the things many of their boosters or their adherents seem to think that they do, and claiming that they do these things confuses us and distracts us from the type of solutions that we need.

Being able to reuse a model can be very, very useful. Being able to, you know, use it on-prem, in a given industrial setting that isn't sending data to a cloud provider, may be necessary to maintain confidentiality. Being able to look at an open weights model and understand a little bit better what might need to be tweaked for our use case.

You know, being able to examine data sheets or the dataset that was used for training, or other affordances, is very useful, and being able to kind of play with that and extend it is also very useful. All of those things are great, right? What they aren't is democratizing in any meaningful sense. And so, yes, by all means, [00:24:00] please.

But what I think we need to do, as, as people who are technologically responsible and sort of grounded in a material understanding, is disentangle the rhetorical halo, which is misplaced, from what these things actually do, what the positive benefits of those might be in different contexts, and what they fail to do and we thus must solve otherwise.

What I'm saying about scale is that that paradigm is not democratizable. Yeah. You can't compete on those terms.

Alix: Yes.

Meredith: What I'm not saying is that a given large scale model that is open sourced and usable, you know, open sourced in the limitations that we talked about and usable by others. Doesn't have utility.

Alix: Oh, interesting.

Okay.

Meredith: Yeah. The utility it doesn't have is a utility to level the playing field vis-a-vis AI dominance. The utility it could have is, you know, you can use it on-prem, you can extend it, you can do X, Y, [00:25:00] Z. But it is not democratizing. It doesn't level the playing field, and it doesn't address the significant monopolistic advantages that have accrued to US firms on the one hand and Chinese firms on the other, and cannot be undermined simply by sort of innovation or open-washing.

Alix: You've talked about tech companies waving the rhetorical wand of openness to distract from concentrated power and wealth. So can you say a little bit about why you think there's this fixation on openness, and why systems that are in practice closed have actually sort of tried to grab the mantle of openness within this space?

Meredith: There's an answer to this that I'll, I'll flash through quickly, at least. Ultimately, interconnected networks, you know, communication networks or the internet, require openness at some level. They require shared standards, shared protocols, shared operational norms [00:26:00] that allow the network to connect, to be interconnected, to interoperate.

And so, you know, there is a, there's a layer of this that is just the internet: at many levels, at the lower levels of the stack, at the TCP/IP layer, and, and many of these standards, it is constituted via open standards and openly agreed-upon ways of doing things, without which it would not be possible. What you have, then, is a term, open source, that was applied specifically to software.

There were different licenses that were created around the concept of open source. You know, what can you do with open source software? You can reuse it. You can tweak it, you can modify it. But if you do modify it, you have to, under some of these licenses, provide those modifications back under the same terms, so other people can use them.

These were [00:27:00] very precise, specific, and endlessly, pedantically debated protocols around what open source meant in very specific contexts. And that applied to your software, which you could basically take, a binary, take the package, and rerun it on your machine, your set of servers, sort of reuse it.

Pretty self-explanatory, at least in contrast to what we're talking about with AI. Then we have the reemergence of this deep learning paradigm of AI. Effectively, in the early 2010s, you had a recognition that in the context of huge amounts of data, which had sort of been secreted and accrued in the context of the platform, you know, business model.

In the context of huge amounts of infrastructure that was already built and calibrated to process and store this data, early deep learning approaches, you know, approaches from the late eighties, early nineties, were [00:28:00] really useful and could do new things when those resources were available. That kind of marks the

advent of this moment in AI. No, it wasn't 2023, when ChatGPT came out; this has actually been a paradigm shift a-brewing for about 10 years here. And the key element there, the key novelty that evoked this new AI moment, was the presence of concentrated amounts of data that had not been available before, and powerful distributed computational systems to process that data, to train these models, and then to perform inference, or, you know, use these models.

So we need to keep that in mind. Then, you know, and I think it was, I don't know, 2023, 2024, open source AI becomes a thing, right? At least rhetorically; there've been people working on more or less open models, et cetera. What you see there is effectively what we've called narrative arbitrage, which is evoking [00:29:00] the concept of open source, which is understood in the context of software.

People have a rough idea of what that means: it's reusable, it's more democratic, we can all kind of scrutinize it and tweak it. And that does decentralize power, arguably, right? Because you can, you can then sort of access that and do your thing with it. Applying that term to AI models and systems

without really explaining that the difference here is that you still need huge amounts of data. You still need huge amounts of labor. You still need huge amounts of infrastructure, both to train and to deploy and use these models. And so the halo of democratizing AI, reducing concentration of power, increasing scrutability,

is assumed by people who understand what open source means in the context of [00:30:00] software to apply to AI, when in fact those capabilities, those affordances, those virtues do not cleanly map onto AI and AI systems. And I'll pause here just to say a little something about scrutability. Yes, you know, an open weights model can be scrutable to some extent.

But again, these are non-deterministic statistical models. You cannot just cut them open and get a cross-section that will tell you why this answer and not that, why this determination and not that. And in the context of things like data poisoning attacks, and the extraordinarily

And then, you know, reinforcement learning with human feedback, which is kind of the labor intensive calibration process that gets these models prepared for, you know, kind of business use and prime time screw ability doesn't give you the kind of insights in the context of an AI model that it would in the context of a software [00:31:00] package.

I would almost argue that half of the technical terms that I came up using in very precise ways are now just vibes-based evocations of abstractions that the user seems to have either forgotten or never learned. Like, we are living in a world where vibes and ideology, and even a theological concept of what technology is, seem to have inoculated

the population, zombie-virus-like, and half of tech is just running around praying to a sky god that lives on an Nvidia chip. So with that as the context of my answer: I think we're in a situation, given the, you know, history of the commercialization of the internet, the way that historical investments happened, and the neoliberal zeitgeist that turned it over to the private market without [00:32:00] understanding that communication networks are naturally monopolistic, and thus enabled a monopoly to form in the US that, you know, kind of spread.

In terms of setting the standards, the terms, and the infrastructural dominance of these platform companies in the US that sort of emerged out of the commercialization of the internet: we're in a situation in which the poles of AI power exist. You have two poles in AI, and this is, you know, China and the US. This is well known. This is not a secret, right? And you, you have other countries, or jurisdictions, that have some

This is well known. This is not a secret, right? And you, you have other countries that have, or jurisdictions that have some. Capabilities, right? You have talents, or you might have data from, you know, various vertically integrated industries that collect data. You might have a handful of supercomputers. You have bits and bobs of the resources needed to create and deploy ai, but you don't have the full suite.

You don't have hyperscaler infrastructure that spans the globe, [00:33:00] which right now, you know, about three companies, Amazon, Google, and Microsoft, have about 70% of the infrastructure market, the cloud market. You don't have the distribution networks, whether that be licensing cloud APIs through your infrastructure that give people access to your AI models and let you make money off of them, or integrating these models into your social media platforms, which is a vehicle both for distribution of AI, exposing it to people who they call users, and for the continual collection and creation of more data purportedly about these people and the context of their use of your service that can be used to feed the AI model and continually refresh it.

And you don't have, you know, kind of the legacy businesses, the marketplaces, the platforms, the advertising networks that can serve to cross-subsidize the extraordinarily expensive [00:34:00] infrastructure and compute and energy requirements needed to train these models. So there is an anxiety, and rightly so, among anyone who's not a major AI company or the US or Chinese state around where they stand in relationship to this powerful new, but also old, set of technologies, and a desire to figure out how you can have a piece of that as well.

I think that is matched then with what I've called intellectual shame, a kind of hesitation, particularly among powerful men, that I see constantly, where the same people who would rip apart a bad P&L document in a board meeting with incisive questions fail to ask a single question when presented with fantastical claims about AI and its capabilities. I think largely because there [00:35:00] is a fear of looking non-technical, looking ignorant, looking behind the curve. There's sort of a FOMO-driven juggernaut that is dictating the need to adopt AI. We don't know where, we don't know how, we're not sure how we measure it, but if we don't have an AI strategy, we are behind.

So that sets the table. So why are we seeing openness invoked? Well, again, I think in the context of this failure to ask the right questions, in the context of this deep anxiety, in the context of a handful of US-based companies, Chinese as well, but the US is really dominant here, at least in the West, who have control over infrastructure, who have control over platforms that are dominating our information ecosystem, and who increasingly have control over what I'll call kind of bureaucratic decision making via owning the models that are then licensed out to governments and others via API contracts or what have you. The sort of, you know, AI as a service.

There's anxiety about sovereignty. There's anxiety about, like, what is a [00:36:00] state if these things are sort of ceded to centralized corporate actors. How do we get a piece of this? How do we, you know, maintain our status, our parity with the two states that do have these?

And terms like openness sound pretty appealing, right? Well, if it's open, if it's democratic, if it kind of looks and smells like open source software, that means we can have a piece as well. That means we can, you know, take an open model and we can use it. And I think there's two things going on. One, there's probably some honest misunderstanding, because God knows no one's asking questions, and there's sort of a lack of grounded material basis for a lot of the claims that are being made about AI, you know, in government and elsewhere.

And I think there is just a, I would say, kind of weak political opportunism, where you don't want to look like a loser. You don't want to look like you don't have a solution to this. You'd like to spend the [00:37:00] remainder of your time in office, or your time in power, being a person who won, not a person who lost. So, you know, quickly trading on a little bit of rhetorical exaggeration may feel, or be, in the short term more advantageous for your own political positioning, allow you to sidle up next to those in power and sort of pretend you're with the party, as opposed to going direct with them and saying, you know, this form of openness is not actually providing the affordances that we need, this form of sovereignty is not actually sovereign, this form of independence is actually not independent, et cetera.

Alix: It's interesting how all of these discussions ended up landing on the kind of flagrant political opportunism happening around AI in lieu of any real scrutiny or evidence base. So let's go back to Abeba, who will build on this and talk about how tacking, quote, 'AI for good' onto the end of a concept can make it really hard to criticize.

Abeba: [00:38:00] The AI for social good framing in AI for social good initiatives kind of tempers, kind of brings in a 'look, we're doing something good, everything about AI is not bad, and you can't criticize us' kind of attitude. But again, I think, maybe to the naive listener, a lot of those ideas might be convincing, because it's easy to believe, to buy into, the 'AI is like magic, it can do anything and everything' narrative. And if you have bought into that, it's then easy to accept that a lot of the complex social, political, economics-related problems can be solved by AI. So at these events, these AI for social good initiatives kind of normalize a much more positive attitude towards AI and [00:39:00] those that are building AI systems.

Yeah, which is, again, problematic. I don't know. You have to really delve into what is the AI that we are, uh, using, that we're trying to deploy, and what exactly is the problem, or the social good, that we're trying to address with these AI systems. It's only when you start asking those kinds of questions that you start to realize that a lot of the claims around AI for good kind of start to crumble and don't stand up to scrutiny. But again, spreading AI for good initiatives in these spaces will further normalize AI as a good thing, and, for the most part, people will actually buy in and think that, you know, these AI systems, these AI for social good initiatives, might actually be doing good. So it's kind of doing a lot of damage, both in terms of, like, people's consciousness, people's knowledge and understanding of what AI is. [00:40:00]

I am not that hopeful for the upcoming summit. My expectation, my intuition, is that they're going to use a lot of, like, the global south, the global majority narrative. And again, it's people from the global south, especially people, again, you know, that tend to be at the margins of society, it's those people that will be put at further harm, that will be further disadvantaged by these narratives and by the uncritical adoption of AI systems. So I am not that engaged with the upcoming, uh, summit, maybe because, you know, the bits and pieces of what I see haven't given me any hope, haven't given me much positive to anticipate.

Yeah. But I suspect that, you know, the global south AI for social good narrative will dominate. And I would be surprised if [00:41:00] that narrative is accompanied by demands for empirical evidence or any critical thinking.

Alix: And finally, here's Usha again, explaining that AI can, in fact, never be good because it's built on so much exploitation.

Usha: Many people have said this, that it's neither artificial nor intelligent, and I think we need to take that a little seriously. But you know, what I find really strange is that all these people who are creating AI are using all the data that they can scrape from anywhere. First of all, the foundation is unethical.

So I've always been wondering whether it's possible to have something ethical on an unethical foundation. I'm not sure. I don't think so, but I'm not sure. But what amazes me is that every one of them has said that AI could lead to human extinction. It's not even like it's one or two people here or there.

Thousands of them are signing onto documents saying that it could lead to human [00:42:00] extinction. There are, uh, people like, uh, Gabriel, who've been saying, why are you waiting for some human extinction later? You have it happening now in all kinds of ways, in discrimination and exclusion and non-recognition.

And so I remember reading a book where they were talking about facial recognition technology, and how one of the companies had given it to the police, and the police were not reading it right. Because, I mean, they were reading it right, but the technology was not reading it right for them.

And the company basically said, yeah, yeah, that's true, that could be a problem, but all we need is for everyone to get onto the database and then they'll be able to do a good job of it. So this idea that all of us should become servile to a technology, and that that servility will produce peace and calm and happiness for us, is such a... even in mythology you don't find this.

So I don't know how in reality we are expected to accept this. The other thing is, I think a lot of the ways in which people have lived through the centuries, it's towards [00:43:00] freedom, and it's towards control over our own lives. We have actually created things like patriarchy to say that, no, no, women shouldn't feel comfortable being free.

And the women's movement has been about saying, just get off our backs, we need to get on with our lives. So it's a battle that we are fighting all the time. It's a kind of self-realization: even if I make mistakes, let them be my own mistakes. Let them not be somebody else's stupid mistakes.

Alix: Thank you to Usha, Abeba, Meredith, and Audrey for coming on the show. Don't forget to check back next week for the final episode in this series with Naomi Klein, Chinasa Okolo, Timnit Gebru, and Nikhil Dey. Um, and also if you haven't listened to last week's, please do, 'cause there's a lot more of this stellar analysis from another group of people who take apart four other concepts.

So do check that out. Thank you to Georgia Iacovou and Sarah Myles for producing this episode and the series. Um, also to Marion Wellington and Zoe Trout for [00:44:00] navigating so much over the last few weeks to push out all this amazing work, and also to our partners, AI Now and the Opti Institute, who have been very fun to work with and sort of figure out how to intellectually wrangle something like this.

And we appreciate you listening, uh, and all the folks that have shared a lot of these ideas and materials in advance of the summit as kind of pre-propaganda for Delhi, and we will see you next week.