AI for All Tomorrows

How is AI changing the state of war?

In this educational episode we sit down with Atay Kozlovski and Ayca Ariyoruk to discuss what autonomous weapons are, how they are being used, and how we should think about them.
 

What is AI for All Tomorrows?

In an uncertain world where AI technology is disconnecting us more and more each day, we are dreaming of a world where technology connects us. The premise of this podcast is simple: let's talk to people and imagine the most hopeful and idealistic futures for technologies that connect and do good in the world.

AI-tocracy (00:03.31)
We get to do this fun thing of waiting a second while the LinkedIn stream connects, which is always a pleasure. All right.

AI-tocracy (00:15.466)
All right, and we should be good to go.

AI-tocracy (00:23.222)
Welcome to AI-tocracy Live, at the intersection where AI and emerging technology meet power. And today we're talking with Atay and Ayca about autonomous weapons systems, which can be a very loaded term with a lot of different political and technological connotations. And so our disclaimer for these conversations, as usual, is that the opinions of guests do not necessarily reflect their employers

or AI-tocracy. However, we're so excited to have Atay and Ayca here today. We're gonna start with something that is not necessarily to do with the topic today, but might be a good lead-in, which is the question we ask everyone. Let's start with you, Atay. What is something inspirational that you're finding in the world, or maybe even in your life, right now?

Atay (01:14.525)
Yeah, so I was thinking about this question when you sent it to me and I was saying to myself, maybe I should do a Larry David and sort of answer that there's nothing inspirational and I hate everything. I'm finding it terribly difficult to find a positive light on things these days. I guess family, friends, right? The very fact that you have people that you love and care for always.

makes you grounded in that sense. I think I try to look at my children and my parents and family and wife, and, right, so maybe that's the best answer I can give.

AI-tocracy (01:47.384)
That's a great answer. What about you, Ayca?

Ayca Ariyoruk (01:50.982)
I'd say it's kind of a hard time to look for and find inspiration. I study AI risks and...

Ayca Ariyoruk (02:00.662)
But I find inspiration in conversations like this that we're about to have right now

together, and I get to meet with Atay, and we're gonna...

Ayca Ariyoruk (02:15.83)
sometimes misunderstood topics and I find inspiration in the conversation that I had with the...

Ayca Ariyoruk (02:26.322)
today, that was inspirational. So I look for inspiration in small connections that I make with people; that gives me joy and hope.

AI-tocracy (02:37.134)
Well, that's wonderful. Well, maybe we can keep with the joy and hope element. A quick note, Atay, on noise: when you're typing, it's coming through. So if you could just mute when you're not talking, that would be awesome as well. So we're going to just dive into this topic and see where we go. There's a lot of different areas, a lot of different places that we can go. But...

Atay (02:49.609)
No worries. Thanks.

AI-tocracy (03:04.0)
I do wanna start just by positioning us, the people on the call, for where we're situated in this conversation. And so maybe, Atay, we can start with you again: who are you, and how are you approaching the topic of lethal autonomous weapon systems, or just autonomous weapon systems?

Atay (03:23.699)
Yeah, so.

What do I do? I'm a postdoctoral researcher. I'm currently a visiting fellow at TU Delft, and I'm situated in the lab that's called the Sociotechnical AI Systems Lab. And I'm doing research on theories of control, meaningful human control over autonomous systems, weapon systems, civilian applications. That's sort of my research area. My background is in philosophy. I'm a philosopher. My dissertation was in the field of normative ethics, so I came from moral philosophy. And then slowly over the past

three, four years, I started applying these more to technological domains. And my interest in military systems comes from my personal background as a soldier and an officer. I served in the IDF for about five years, so I have that as a personal life story. And then, some ten years after being released from the service, I became interested again in military technology, in a more critical way than the naive 18-year-old who's sent there and sort of experiences

things for the first time; looking at it from a more mature perspective, you see things completely differently. Maybe we'll touch in general on what I think about sending young soldiers or young

kids to fight and do that work, and how that relates to autonomous systems. So that's my interest. It's mostly on the ethical side of it: how do we take theories, moral theory, but also decision theory, which is what I specialized in, and apply these to systems that we try to use, hopefully beneficially, to,

Atay (04:56.561)
right, gain the benefit from what they offer while reducing the harms that might come about. And as we discuss this in the context of automation in weapons and in the military, we'll see whether that's even a possibility or whether it's a fantasy, right? So, hopefully we'll talk about that.

AI-tocracy (05:12.811)
Ayca, what about you?

Ayca Ariyoruk (05:31.542)
which is different from other types of AI indexes that do comparative studies on countries, in the sense that we're not tracking investments or research, compute, or technical safety standards. We're just looking at...

Ayca Ariyoruk (06:01.43)
three or four years, so we've been tracking these developments, and we've noticed that there is now a great number of policies and frameworks and treaties and declarations coming out, and...

Ayca Ariyoruk (06:21.27)
exceptions and do not touch on AI used in the military. So that sort of got our attention, at which point we started tracking, and that took us to the negotiations at the United Nations that started a while back, about ten years ago,

Ayca Ariyoruk (06:44.694)
a treaty that binds countries with rules and regulations around how to...

Ayca Ariyoruk (06:57.174)
And we started sort of following the negotiations and looking at how things were going, and then we...

Ayca Ariyoruk (07:13.326)
that sort of expands into the use of AI in the military, one that doesn't constrain the risks from these weapons systems to kinetic force, as in just the weapon itself. So that's what brought us to this podcast. But my background is in political science. I did my graduate studies in peace studies. I studied diplomacy.

Previously I worked for think tanks in DC and in New York

Ayca Ariyoruk (07:57.198)
on behalf of civil society, on the efforts around nuclear nonproliferation and disarmament treaties. So it's sort of like a full circle for me, and now I'm looking at...

Ayca Ariyoruk (08:13.992)
spread of these weapons in a way that is not going to help bring about...

AI-tocracy (08:22.516)
a quick technical pause. So for one reason or another, it is not streaming on YouTube currently, and yet we do have three people in the waiting room of the YouTube stream that it was supposed to be in. So that's interesting. And I'm trying to figure out the best way to handle that, especially if you both had people

to be.

Is that something that you think is an urgent thing to figure out or fix?

Atay (08:52.201)
It's a recording, I think it's fine to just use that and that's enough. We'll just send that out. And that's my opinion, of course.

AI-tocracy (09:00.692)
Okay, all right, great. I just wanted to make sure I wasn't offending anyone by saying, hey, there's this thing. It is a little bit of a bummer that people are waiting but are not here. And maybe I'll do some clicking on the back end over this next question, but I want to be present for this. So.

So one thing that came up in my research going into this episode was the definition of what we even mean when we're talking about autonomous weapon systems. Again, earlier I used the terms lethal autonomous weapon systems and autonomous weapon systems. Some people seem to treat these terms as interchangeable, and some people are definitely contesting what terms we use when we draw this circle around this technology

that we're talking about today. And so I'd like to do a little bit of definitional work as we're jumping in, just to figure out for you both how you're drawing the circle around, or bounding, this kind of technology altogether. So Ayca, let's start with you: how do you think about autonomous weapons systems, or LAWS, as we sometimes call them when they're lethal autonomous weapons systems, as maybe a definitional starting point?

Ayca Ariyoruk (10:18.871)
The definition has been contentious from the beginning, and that's one of the things that these hundreds of diplomats...

Ayca Ariyoruk (10:35.254)
In terms of the negotiations, there's a working definition, and it goes like this: any system that, once activated, can engage targets without further intervention from a human operator. That's sort of the working definition. But in practice, it is hard to...

It is hard to pin down if the goal is to establish some understanding and norms on how we use it, and not only how we use it, but how we sell and deploy it.

Ayca Ariyoruk (11:21.748)
control, but that...

after activation, that is the way that you use it.

AI-tocracy (11:38.324)
Atay, what would you add?

Atay (11:42.121)
Yeah, I think that, you know, it depends on what perspective you're coming from. If you're talking from a regulatory standpoint, right, you need to use certain terms. If you're talking about an ethical standpoint or an ethical analysis, I think that the functionality of the system is sort of what is at the core. What can it do? What can't it do? What kind of limitations do we put on it? What are our expectations of it? So I don't know how to get into the real definition of it. I think there's a consensus; there was this American document that sort of defined it, as Ayca mentioned:

once it's activated, it can do certain things on its own:

target, identify, track, and attack, sort of those key issues. The issue of lethality is one that's being debated, because proponents of regulation are saying, well, that's not the core aspect here, right? Because the system could be non-lethal and still cause tremendous amounts of harm. So that's sort of the debate around the use of lethal versus non-lethal. From a philosopher's perspective, I think that the issue is, we'll get into that later, but more: what

use cases are we putting these systems into, how are we developing them, how do we see them from a sociotechnical perspective. So those are the areas that I try to focus on more.

AI-tocracy (12:54.538)
Yeah, so let's do that. I think that's the important part here: how is this definition being operationalized, both politically and also technologically, and, it sounds like, ethically. So, Atay, could you talk about some of the use cases that we're seeing right now? I think this is a topic that's on a lot of people's minds right now, and maybe causing a lot of fear or consternation among us.

And so could you just ground the conversation a bit?

Atay (13:25.641)
Sure, I can try to do that in the simplest possible way. I think the first thing is to distinguish between our imagination and reality, and there's a big gap between those two. So when we talk about lethal autonomous weapon systems, or, as they're often called, killer robots, right, we envision these Terminator types, or the Sentinels from The Matrix, right, these really highly sophisticated tools. In actuality, it's more typical weapon systems that can just be operating

without human intervention, so vehicles, drones, weapons systems directly, turrets, and so on and so forth. And the idea of the autonomous nature of it is that it has certain capabilities, using machine learning tools, using advanced software, so that it can do a lot of things that humans used to have to do, in order to accelerate the kill chain. So one example would be targeting systems, right? We can use sophisticated algorithms to create a bank of targets

that the military operator can then easily select from and activate. So that would be like a semi-autonomous system. And then, typically on the drone side of things, we have what are known as loitering munitions. These are types of systems that you send up for an extended amount of time, and they are sort of pre-trained to identify specific kinds of targets, a vehicle, a tank, a radar facility, and they will just wander around in the air for five, six, seven hours and just...

dive-bomb onto that target once it has met certain parameters that it was designed to identify. So those are mostly the use cases that we're seeing with the weapon systems. And then, if you want, we can also get into the non-weapon side of AI, which is basically all these recommendation systems that are being used.

And I think a very, very helpful way to distinguish the two kinds, and sort of to make sense of all these different use cases, is to think that there are two types of delegation that we can do. The first is to delegate tasks and operations. And this is a model of AI warfare that's often referred to as Centaur warfare. So imagine the mythical creature, the Centaur, which has the legs of the animal and the head of a human, right? So the human tells the legs what to do; the human tells

Atay (15:43.699)
the AI what to do, and the AI goes and does it. The flip side of that would be the Minotaur style of warfare, where the head is the head of a monster on top of the body of a human. So in this case the AI does the cognitive tasks, whereas the humans do the legwork. And so those are the two extremes of these types of delegation: you're either delegating the tasks of the weapons, the drones, the systems, or you're delegating the cognition of it, who to attack, who is the right target, right, and which tactics to deploy. I think those are

sort of the two edges of this technology. There are a lot of things that sit in between these two, and we can go to specific use cases later on and expand on them and on what we're actually seeing in the field.

AI-tocracy (16:23.564)
Ayca, are there other examples that come to mind for you when you think about this topic, kind of the quintessential examples of the use of these technologies?

Ayca Ariyoruk (16:44.68)
AI can be the cognitive side, and AI can then be the execution and delivery side, and I think that's the scary part that we're seeing in this last AI boom.

seeing in terms of the tech companies, the tech industry, and the defense industry converging with the government. So there is sort of a really big convergence of power.

Ayca Ariyoruk (17:14.236)
Again, these are technologies that are adding up on each other. You have data, you have robotics, you have surveillance, you have autonomy,

all of these different technologies layering up on each other. So you use predictive analytics that are not scientific to generate target recommendations, and then you can attach that to either a conventional weapon or an unconventional autonomous weapon, a precision weapon, either one, and you can use

Ayca Ariyoruk (17:57.162)
military threat assessments and war plans, sort of integrated into the communication and intelligence capabilities. So once a commander, a human commander, is reaching a decision, how much of that decision is human-made, and how much of it is influenced by AI? So these are the kind of bigger questions, obviously very important for...

Ayca Ariyoruk (18:31.038)
In terms of the use cases, I think we can, you know, dig...

Ayca Ariyoruk (18:40.384)
some have raised some concerns, and most recently we have Operation Spiderweb, which...

Ayca Ariyoruk (18:50.934)
here.

Ayca Ariyoruk (18:54.694)
of drones in multiple trucks into airfields deep inside Russia, and bombed

Ayca Ariyoruk (19:11.462)
bombers that were actually capable of carrying nuclear warheads. So this is a very big blow to Russian military assets. It could be, I mean, from a sort of humanitarian-law point of view, no civilians were killed; these drones were apparently trained strictly on military assets, strictly on hardware, strictly on weapons systems,

not humans. In that sense, it's a perfect execution in line with international humanitarian law. But on the other hand, it's a significant escalation.

Ayca Ariyoruk (19:57.302)
here again.

Atay (20:08.681)
I have sort of two thoughts that are coming from what you're saying. So one, I think, is a very interesting issue that comes up a lot, and it's our concern that as we automate these systems and increase the autonomous capabilities, we're sort of theoretically reducing the risks to our own soldiers. And the concern around that, while that might be a very positive thing, which we can touch on later, right, fewer of the "good guys" will be killed, in quotation marks, which is the main motivation for why we want autonomous capabilities.

AI-tocracy (20:35.509)
Hmm.

Atay (20:38.635)
On the other hand, the concern is that we're lowering the threshold of escalation, for war. It's sort of, you know, as it becomes easier and cheaper and less costly for our side to commit acts of violence via technological means, perhaps we are creating an escalation in the way that wars begin. So that's one consideration that we need to weigh. The second issue that comes to my mind is the knowledge gap. Because of the military secrecy around these technologies,

And because they're often deployed in war zones, we have such a huge knowledge gap as to what is actually happening in the field. A lot of our knowledge is either based on social media reports or on journalistic investigations. Very little is coming out from governments, very little is coming out from military reports, from international organizations, so on and so forth. And that creates a lot of speculation. To date, I think that if you ask most researchers, no one will definitively

confirm that there has been an autonomous weapon system that has killed an individual. I think we do not have actual confirmation that these systems have been used, and if they have, it's been very rare, in very few occurrences.

AI-tocracy (21:45.483)
Mm.

Atay (21:57.013)
Nevertheless, we have a lot of speculation that many of these drone systems have capabilities that amount to autonomous levels, and it might be just a matter of switching them on or off. So we have a huge knowledge gap. That's why I always try to be quite cautious. I think we need to be very concerned with where this might go. And I was sort of talking about the convergence of both the cognitive and the physical; I think that's sort of the worst-case scenario that we are imagining. And from a regulatory perspective, I

think we need to be extremely, extremely risk-averse in this sense. The goal is to not let innovators and people who invent tools and toys experiment with them in the field, just to see what we can or can't do. So even if we have a knowledge gap, I think that in and of itself tells us about the caution with which we need to approach this issue.

AI-tocracy (22:51.698)
Atay, just to do a quick clarification: when you're saying that there's a knowledge gap, is there speculation because there are no official statements being put out by people about these systems in use, or what is causing that knowledge gap in your mind?

Atay (23:08.009)
So, one, these are new technologies that are constantly developing. A lot of it is being done grassroots; it's going bottom-up rather than being new types of technology that are slowly implemented by, you know... If you look at what's happening, for instance, in Israel, right, the Israel-Gaza war: there was this horrific attack on October 7th.

And then the army sort of began to deploy all its tools and all its capabilities, some of them in test phases, some of them completely untested. Then you suddenly have these things that you need to respond to. So there's a real issue that it's not going through a process of development, testing, evaluation, and so on and so forth. So we have little documentation about that. Secondly, there's just military secrecy. And finally, from a regulatory standpoint, as Ayca mentioned before, if we think of the AI Act as perhaps one of the most progressive

sort of regulations limiting AI capabilities: military systems are excluded from the AI Act, right? It does not refer to military technologies at all. And this is because regulators are very, very careful about what they can and cannot pass when it comes to military technology. So all of these issues sort of make it

difficult for us to know what's happening in the field, to have actual data that we can analyze. A lot of it comes from secondhand sources or from hearsay. Time will tell, hopefully, and we'll collect more data, but I think at this point in time there's still a lot of fog.

Ayca Ariyoruk (24:34.144)
I think it's precisely for that reason, the lack of transparency, and the need to have transparency and accountability during times of war, that we are extremely alarmed by the use of AI in the military. So if the government...

In democracies we need to know what's going on, right? In democracies we need to know why...

Ayca Ariyoruk (25:02.71)
taxpayers' money is being spent; the public participates in these decisions, and they have a role to play. And that's why democracies have rules for war, because democracies tend not...

Ayca Ariyoruk (25:19.222)
sort of domestic and international rules and systems where we can resolve our conflicts peacefully. We have these systems; that's why we don't fight. So there's a whole system for peace. But what we are seeing right now is really this return to what I think of as the political science of the caveman era.

The "if you want peace, prepare for war" positioning. As a matter of fact, it is official U.S. policy right now,

Ayca Ariyoruk (25:56.63)
which says it is...

the policy of the United States to accelerate defense procurement and revitalize the defense industrial base to restore peace through strength. This whole idea about peace through strength, number one, true...

military technology, the military defense industry, it's, you know, I'm not a military person, but I work with military people, and I know it's very expensive to upkeep, and you have to upkeep it. And yes, we need to modernize it. But

the whole idea that attracted us to these lethal autonomous weapons systems, or drones, is that they're cheaper. Just like in Ukraine, where drones that cost maybe a thousand or fifteen hundred dollars can take down tanks and aircraft that cost hundreds of thousands of dollars. So this just asks: are we gonna outgun...?

AI-tocracy (26:56.287)
Mm-hmm.

Ayca Ariyoruk (27:04.968)
If we are setting ourselves apart, how are we going to set ourselves apart from our adversaries? Basically, it's about why we are fighting. Why are...

Ayca Ariyoruk (27:16.438)
If they can outspend, if they don't need to be spending a lot of money to...

Ayca Ariyoruk (27:30.336)
So.

Once we remove the accountability, we can't decide who did what. It becomes easier during the war...

Ayca Ariyoruk (27:43.51)
setting ourselves up for failure, then we are setting ourselves up for more conflict. We're going to set a bad example for a lot of bad guys...

Ayca Ariyoruk (27:58.022)
weapons is one

America to fight these wars.

Ayca Ariyoruk (28:10.336)
fight wars in a way that reflects our values, then we have to ensure accountability. When there is autonomy in the decision making...

Ayca Ariyoruk (28:28.214)
That's why we need to establish some norms around how we use autonomy.

AI-tocracy (28:33.673)
Yeah, I'm struck by your use of "we" and "our" in that. And as you were saying that, I was thinking, okay, but, you know, who, right? Whose values? And this is something that I cover a lot in my research, in a very different context, but we're no longer living in the world that was, quote unquote, re-stabilized after World War II. And there are a lot of democracies that are

becoming less democratic, and fewer democracies. And we're seeing various other political changes and upheavals happening around the world. And so I'm wondering if we can put this conversation in, I guess, a broader conversation about how these systems are developing or being deployed in a political world that is changing

very rapidly, where there are some folks that might be very pro value-based regulation, and some folks that are very much, no, let's go to Palantir, for example, and just have whatever technologies will move fastest, that are the cheapest, and that will allow us to cause perhaps the most devastation, that will allow us, whatever quote-unquote country we're in, to win. So, Atay, that's a big, perhaps very loaded question, but could you

contextualize some of this a little bit in a broader sense.

Atay (29:59.475)
Yeah, I think the best place to start is with the positive pitch for why we want autonomous systems in the first place. What is driving us towards it? Well...

It's also complicated, but I think at its essence there's this idea, and let's assume good actors, right? Let's give the benefit of the doubt. We have this technology; let's see, can we do anything good with it? Well, first of all, I think there's this drive: let's protect our soldiers, right? And as someone who has experienced this life of being in the military,

There's no politician that can be re-elected if he comes to a mother and tells her, you know what, I'm gonna send your son to die rather than sending a robot, right? So that's a huge, huge issue politically, but also morally, right? If I can prevent death, that's an excellent thing. So that's the one side, defend our side. The other side is the dream.

These systems will provide more objectivity because they'll analyze data. These systems will provide more precision because they're accurate. These systems will be fairer because they don't have all of our human biases. These are all the tropes that we hear from anyone who's trying to present the positive side of autonomous weapons. So that's sort of what we're competing against. And it's important to note that there is strength there; I find a lot of appeal in many of these points. And I think when we come to evaluate them, we have to ask what is fantasy and what is real.

If that is the ideal, is that what we get when we actually use these systems? And to that, I think the resounding answer is no. We get something drastically different. And so when we come to think about it from an ethical perspective, right, what are the counterbalancing issues that we need to consider? There are typically three kinds of categories that we look at. One, we ask questions of compliance with IHL, or international humanitarian law. Can these systems follow the rules that we

Atay (31:51.022)
as a global society have decided govern how

war is to be engaged. Now, a lot of people will say there are no rules in war, that war is the fight of all against all. But that's not true, right? We've set standards and rules, and from a moral perspective we have this strong intuition that there are some weapons that should not be used: nuclear weapons, chemical gas; you shouldn't desecrate bodies for no reason, right? We have these strong deontological intuitions where we accept limitations on the use of force. So the first question: can these systems comply with IHL?

The second question is how these systems sort of disrupt the structures of responsibility that we expect humans to uphold. Can we hold anyone accountable when an autonomous system makes a mistake? And who is that person? The problem here is often what's known as the problem of many hands: there are so many hands pushing and shaping these systems that we don't know which hand to grab and say, this is the bad actor. Too many people involved, and so no one is to blame. That's a risk.

The third sort of group of critiques that we have are questions of dehumanization. Are we essentially doing something undignified, something that we should reject on principled grounds, when we allow algorithms to decide who lives and who dies? There's no more human judgment involved, no more knowledge of what it means to be in the field and to be a victim of these types of systems. We're not treating even our enemies as humans.

Robert Sparrow, a philosopher who deals a lot with this issue, wrote a very powerful piece on this. He said that when we use autonomous weapons, we treat our enemies as if they were vermin.

Atay (33:34.747)
So it's the epitome of dehumanization, according to Sparrow and to other objectors. So we have those three sorts of critiques, and we need to balance them against the potential benefits. And as we dig deeper into those critiques, I think they pose fundamental problems for this technology. Because I just mentioned that it's cheaper, right? But I think the problem goes deeper than that. In that...

we have this vision of what we are getting, but these tools are good for something very specific. They cannot be used in a great variety of ways. What they are very, very good at, and I have to mention this, is pace and scale, and once we adopt these systems, we are radically changing the way war is conducted.

We are incorporating sort of algorithmically oriented decisions, which are fundamentally different from how we as humans approach war. So the shift is quite radical. And then, as an ethicist, you come and say, well, let's look at these sides, let's balance them out. Is there an optimal solution? I think it's an interesting philosophical question.

As a regulator and as a policy decision-maker, I think you don't have the privilege of "let's consider all the values." You have to be very safe, you have to be very risk-averse here, you have to be extremely, extremely protective of the vulnerable individuals who will mostly suffer under the use of these systems. And so, yeah, it depends on what hat you're wearing, but hopefully that gave sort of a

sense of the types of questions that come up here. And we haven't even touched... I'm just talking too long, so I'm trying to end. But we haven't touched on sort of the fundamental technological problems, which I won't go into now but we can get into later.

AI-tocracy (35:23.184)
Yeah. So, Ayca, I'm going to throw it to you to respond in one second. But Atay, I think you are pointing out something very particular about this technology, which is that, almost by definition, lives will be lost by how it's deployed. Sometimes when we have philosophical conversations about, say, AI or artificial general intelligence, there are direct impacts, perhaps, for how, you know, we're building a chatbot or something. But for these technologies, they're literally being designed

to support military warfare in a lot of cases, or at least being deployed in that context. And I do think that...

Atay (35:58.045)
Wars.

Atay (36:03.645)
No, no, I'm sorry for cutting you off, please.

AI-tocracy (36:05.512)
Yeah, and so I think there's a different set of stakes for how we discuss this, even as ethicists, which, again, I would love to get back to in a moment. But Ayca, I know we've covered a lot of ground here, and it looked like you had some things to say about where we've gone.

Ayca Ariyoruk (36:24.47)
I was really happy that Atay started with the promise of autonomous weapons systems. I think, going back ten years to when it was initially brought to our attention, there was some hope that, okay, we can reduce casualties to our soldiers and that we can conduct wars...

Ayca Ariyoruk (36:45.142)
seen from the examples and the emergence of new technologies, we saw that these promises were not true. Number one, as I mentioned, we don't want to dehumanize war. Dehumanizing war actually means removing public consciousness, and what's happening on the ground, from the decisions of war. I mean, that's just the democratic... I am looking at it from, when you say, you know, when you say "we," who are you talking about?

and people who believe in democratic values.

We need to have our pace and our pulse on the battlefield. We need to know why we're fighting. So that's number one. Number two, one of the things that I think the experience in Gaza particularly taught us, and I know Atay has studied this more: precision and accuracy are not the same thing. Autonomous weapons can be precise;

that doesn't mean they're accurate. They can be inaccurate in two ways: they can be trained on biased data, and they can make errors if they're using, say, surveillance and they're identifying a target, matching a target, whether on a photo or biometrics or a heat map or whatever they're using; they're prone to bias. But they will shoot to kill. They will not miss. So that's number one. Number two, biased predictive analytics

can generate kill targets. So this is on the question of autonomy. We think about autonomy as a weapon just going wrong and doing its own thing. But autonomy also emerges from sheer scale that a human, think about a commander, no matter how capable, how smart, how well trained, just cannot keep up with, because the whole reason we're having these systems is the scale. So if there is a target recommendation that can...

Ayca Ariyoruk (38:46.504)
If there's a system that can generate target recommendations in the tens of thousands, it then becomes impossible for one or two humans to oversee and approve and...

Ayca Ariyoruk (39:03.67)
So I think when we think about autonomy, that's what we also need to think about: the scale and the speed that are just beyond our control. Now, if these are going to be beyond our control, then why are we developing...

I know Atay is like, there aren't really any truly autonomous weapons; all of these weapons right now are under human control. But if you're going to keep them under human control, then why do we have autonomous weapons? So I think these are sort of important questions that we need to tackle, especially as it relates to how we as democracies have set ourselves as...

Ayca Ariyoruk (39:48.988)
of complying with international humanitarian law. There are some really practical steps that diplomats can take: try to narrow down on weapons systems that are trained on human data and that target humans. You want to take down a tank? Fine. You want to take down a strategic bomber? Fine. But if you're going after personnel,

It is hard for these systems to distinguish. I mean, I saw this YouTube video of the...

Ayca Ariyoruk (40:25.182)
CEOs saying, why wouldn't we want to use AI to distinguish whether this is a tanker or a bus, a school bus? The question is not to distinguish between which one is a bus full of children and which one is a tanker; you don't need AI to do that anyway. The question is, is this school bus full of children?

Are they children or are they combatants? And how do you distinguish that? And you can now hinder, you know, how...

If you are, I mean, accountability is the most important thing, the way we...

agency than we...

to provide accountability and decision making.

AI-tocracy (41:18.068)
Atay, do you want to respond to that?

AI-tocracy (41:41.224)
Yes, that would be great. You are a little choppy right now, so I'm not sure if maybe turning off your video would help or something. I just want to make sure that we're getting your words, and we can try it out and see. See what? A little bit. Yeah, I think so. I think, yeah, a little bit of a lag, but I think you're good.

AI-tocracy (45:12.228)
Atay, we are losing you a little bit, so I'm going to turn off your camera to see if that helps.

AI-tocracy (45:27.526)
Yes, I think a little bit. Would you begin your example again?

AI-tocracy (45:37.079)
That's right.

AI-tocracy (46:37.402)
Atay, anything you want to say in response to that? And then I'm going to bump us up a level as we move towards closing the conversation.

Ayca Ariyoruk (46:43.542)
Exactly as Atay said. And I think that's... we're assuming that the system was 90% accurate, even if it's 90%. And that is also open to question, in terms of how they tested their accuracy. There is a way in which we can anticipate what's coming, the way some of these technologies, surveillance technologies, I mean...

The way that system gets there, it starts at home, with the way that you collect data, with the way that you treat people. The moment you start overlooking individual freedoms and human rights, and you are chipping away at people's rights just for the sake of military and security, you're creating the surveillance system at home that these weapons build on.

Ayca Ariyoruk (47:31.914)
You know where it's going. You know the direction. That's why it's very alarming right now that...

Ayca Ariyoruk (47:40.943)
procurement in the public space, in the government. We have no...

Ayca Ariyoruk (47:53.781)
in God.

Ayca Ariyoruk (47:58.152)
a year and two years ago, we need new ones.

we can then accelerate procurement. It is not an obstacle to modernizing the military. And I think that we have to separate these things. The question is not do we modernize military technology or do we not; it's a question of how we modernize this technology, in a way that reflects our values. If you're doing all...

China, because we are fighting China, because they are an authoritarian country...

Ayca Ariyoruk (48:36.438)
We have to collect our data in

individual freedoms, right? We protect these things in the system, we form alliances, we attract supporters who agree with these values, so we don't have to just invest in military muscle and just be strong to get our way, right? Or else we become an authoritarian country in the process, because we're trying to fight one.

We just need to really unpack this whole policy of peace through strength, of "if you want peace, prepare for war." It just does not add up.

AI-tocracy (49:21.732)
Yeah, one of the things that is really interesting to me about the language that we use around LAWS, or AWS, is the word autonomous. And we've said the word autonomous a lot here, and I'm an academic; I like my theory and my AI ethics at a higher level as well. And so I'm just wondering, what do we make of that word, autonomous, maybe in a broader conversation around AI?

Atay, you are in the ethics world and sometimes in the philosophy world, maybe all the time in the philosophy world. And so I'm wondering if you can say a bit about how these conversations around autonomous weapon systems play into broader narratives about autonomy and AI generally.

AI-tocracy (52:05.155)
Yeah, yeah, absolutely. Ayca, what do you think?

Ayca Ariyoruk (52:08.182)
I mean, I think it made complete sense. And I think that's a debate that has gone on for over a decade.

Ayca Ariyoruk (52:21.428)
And I think we've reached that point right now where we've sort of come to the same conclusion as you did: we need practical ways to move forward in terms of how we're going to approach these weapons. We're living at a time of great geopolitical uncertainty. The security risks, the protections that were once there, that could have deterred some of these countries, that said, okay, we are protected, we don't...

Ayca Ariyoruk (52:57.332)
getting it right.

It is very important at this point that the countries producing these technologies, or at the forefront of them, and there's only a handful of them, sort of try to do the right thing. And as the Center for AI and Digital Policy...

Ayca Ariyoruk (53:17.782)
under the auspices of conventional weapons, but they should be considered under the auspices of weapons of mass destruction. Because, just as I said, a single system can generate thousands of target lists that can kill indiscriminately, with great harm to civilians. At the same time, thousands of the swarms, the loitering drones, can wipe out populations indiscriminately. These are all the qualities that belong to weapons of mass destruction.

Ayca Ariyoruk (53:52.894)
Okay.

AI-tocracy (55:08.996)
Yeah, well, and we are moving towards wrapping up, even though we could probably talk about this for days at a time, and perhaps in the future we will have a part two through seventy, as this also continues to evolve. But because we've covered a lot of ground, and because I am seeing this as an educational episode, at least it has been for me, I was wondering if, for each of you, in a sentence or two, you could just share the main takeaway

that you would have for viewers on this topic. If there was one thing that you wanted people listening to this episode to know about this topic, what would it be? Ayca, would you mind starting?

Ayca Ariyoruk (55:51.318)
I think if there's one takeaway, it's, number one, as you said, I hope it's educational. This is a concern for every citizen. Everyone should be aware; everyone should be really paying attention. I think we've come to a place where maybe we are desensitized, through the news feeds that we see on our phones, to conflicts and suffering, but we just cannot afford that. We have to be involved, and we have to understand these things and not let ourselves be overwhelmed. They are not actually

too complex if you have the chance, the opportunity, to engage with them. I think it's pretty simple how we approach lethal autonomous weapon systems. It starts, first and foremost, with our policies around how we approach AI in the first place: how we approach data protection, how we approach privacy, how we approach...

how we have strong institutions to protect the rule of law, checks and balances on power. So if a country is democratically aligned at home, in the beginning...

decision-making, it will reflect itself through to the point where it comes to war. And war is the last resort, number one. It has to be the last resort to be right, to be legal...

what is the last resort and how that war is conducted. It's just sort of like, if you make a meal, it all goes into the ingredients. So by the time it reaches defense, reaches our foreign policy, and is being used to execute our foreign policy objectives, we have to have a better system at home, and that starts with guidelines, norms, rules around how we use AI in a way that reflects our values. We have to protect that in general.

Ayca Ariyoruk (57:35.446)
That's how we approach lethal autonomous weapon systems. And specifically on that: we have to find a way to delegitimize and stop the targeting of humans by weapon systems that are either semi-autonomous or fully autonomous or can turn autonomous at any time. Basically, human targeting.

AI-tocracy (58:01.07)
Atay, what about you? It's a tough act to follow, but what are your final thoughts?

AI-tocracy (58:55.468)
And for the last question, and we can go rapid-fire through this: if you could recommend one book, movie, or other piece of media for listeners, what would it be? And then, if folks wanna follow up with you or perhaps with your organization, where would they go? If there's a Twitter handle or perhaps Bluesky or wherever, where can folks reach you? And Atay, let's start with you.

AI-tocracy (59:25.316)
I tried, I always try to make the turn and this is more awkward than most times, but we're gonna do it. We're gonna conclude with the same question every time.

Ayca Ariyoruk (01:00:20.148)
Thinking about a song: there's this band that I discovered a few years back, a special person helped me discover this band. They're called Brothers Osborne, and it's a country band, a country-rock band. And they have these... they have their songs, the lyrics, I mean, the music is on point, the lyrics... I would just recommend the song "Rum" and...

AI-tocracy (01:00:58.979)
Awesome. And for both of you, I imagine LinkedIn is a good place to reach you and to engage. If there's anything else you want to plug, this is the plug time.

Ayca Ariyoruk (01:01:05.972)
Yes, CAIDP.org is where you can check out our Artificial Intelligence and Democratic Values Index.

Ayca Ariyoruk (01:01:18.204)
Based on 12 metrics, you will get an idea about where countries are in terms of their AI policies. And, yeah.

AI-tocracy (01:01:28.813)
Wonderful. Well, thank you both so much for joining me today. This has been a really meaningful conversation for me, and I have also learned a lot, and I hope listeners have as well. And thank you to whoever is on the LinkedIn, since the YouTube didn't work today for the live stream, for Ayca and Atay. The episode will be coming out next week, so we will share it with our audience in podcast and video form then. But for now,

Ayca Ariyoruk (01:01:44.086)
You

AI-tocracy (01:01:56.749)
Thank you so much for joining AI-tocracy Live this week.