Urban Innovate TALKS explores how cities, technology, and people shape the future of mobility. Hosted by Dr. Henriette Cornet, the series features conversations with global experts on autonomous vehicles, public transit, and the human side of innovation — including insights from pulsur, our AI platform for understanding public sentiment and travel behavior.
Speaker 2 (00:14)
Hi everyone and welcome to Urban Innovate Talks. Today with us we have Katie Evans, who is for me a very special guest, very different from the guests I've had so far. And I have to say I was a bit nervous preparing the episode, because it required some prep that I usually don't need, because I'm in my comfort zone, let's say, talking to experts around autonomous vehicles, engineers, business people.
Katie is different. Katie is a philosopher. And I hope everybody still stays on when I say that and actually gets curious. And don't worry, because we will talk about autonomous vehicles. We will talk about AI. That's really a topic I want to dig into in detail, especially ethics around AI. And just a word to introduce Katie. So Katie has a PhD from the Sorbonne University.
And she focused her work back then on ethical decision processes of autonomous vehicles. So here you go, there is a connection. And she's also working as an assistant professor at Sorbonne University. She's a consultant, and she did some amazing work for UNESCO, the United Nations Educational, Scientific and Cultural Organization.
And under their umbrella, you were the author of an amazing graphic novel called Inside AI: An Algorithmic Adventure. I love the title, and I hope we'll take some time to dig into what it is. But yeah, that's the way I perceive you. So welcome first, and do you want to add anything to this introduction?
Speaker 1 (01:58)
Thank you. No, that was really lovely. Maybe I'll just add, since we're talking about mobility, that in my consulting, on behalf of the IEEE, I represent them at the United Nations Economic Commission for Europe, UNECE, which is responsible for global harmonization of vehicle regulations.
Speaker 2 (02:21)
So yeah, nice that you mentioned the work at the United Nations around harmonization of regulations, because that's actually where we met, at a meeting of WP29, the GRVA.
And I'm still following this activity through the European Commission, which I advise on updating these regulations. So there's a lot of work there. But when we met a few years ago, I was very impressed by your mindset and the way you look at all these issues, and that's why I wanted to keep in touch.
Would you like to say a word about this novel, this graphic novel that you authored?
Speaker 1 (02:55)
Yeah, so before I did philosophy, I had a bit of a background in screenwriting; I went to film school. And so I was very much in the business of writing, maybe not science fiction, but certainly comedy and full fiction. And so then getting the remit to do the graphic novel was incredible for me. I think it's still the most rewarding project I've ever done.
But it was also the most challenging, because despite having skills as a subject matter expert on the AI side and skills as a screenwriter, it's very difficult to actually engage in science fiction when you're writing AI literacy, because you're always trying to explain something and make it accessible. And it's very difficult not to traffic in your own values and views as you do that. And so, yeah.
A good friend of mine helped me write this. He's a screenwriter too; his name is Makshaba. And so when we were working through the hard parts, our slogan, kind of our operative idea, was that fun is not really accurate. There are a lot of really fun plot devices and twists and tensions, from a dramatic perspective, that you could add but that you kind of can't use in AI literacy.
But I also think in the case of the comic book, the thing that was the most important, and what I think is the principal mistake when we try to educate people about AI, is to make it a person or even a character in the story. Now, it's an open question what AI is, whether it's an object, a subject, a substance, whatever, right? But the problem is that having a cute little AI robot that explains itself and justifies itself kind of humanizes it. And from a scientific perspective, that's not accurate.
Speaker 2 (04:32)
Yeah, and this personification, right? We give AI a mind, a consciousness, that it doesn't have.
Speaker 1 (04:38)
Right, yes. And especially because when we're told stories outside of literacy, we're always supposed to attach ourselves to the characters, to have empathetic ties to them. And this is what drives the plot forward, because usually the main character struggles, et cetera, et cetera. And so it's difficult to be kind of value neutral and make characters.
So instead, we made the AI more like a place, because then you're kind of in an environment and you can understand what this AI technology can do for you as a citizen, but also from a technical perspective, without having to introduce good guys and bad guys and moral lessons, which, given UNESCO's position, obviously it's important to be as neutral as possible, right?
So it was a great exercise, and I still like to think of AI as an artificial place. It's one of the things that I do in philosophy.
Speaker 2 (05:25)
Yeah, I love that. When I heard that idea of AI as a place, I was thinking about the tool I'm building, my AI tool pulsur, and we will come back to that. I'm like, okay, if pulsur was a place, what would it be? And I think I will go through that thought exercise, to really try not to think of it as an agent that has a purpose, so to say, but more as a place where the
public transport agencies in the city can navigate, and AI more as a compass.
Since we mentioned personification, we can jump into the topic now. There is something I realized while working in the area of autonomous vehicles, which is that the idea is to be safer than humans.
So my view would be: if it is already just a little bit safer, that's already good enough. It's already better, so to say. But some people expect zero accidents, like absolutely no accidents at all. And what I notice with incidents, or even accidents, involving people and AVs is how emotional people get when
they are caused by an AV. And it upsets me very much, because we don't talk about all the road fatalities that happen every day. It's on the local news sometimes, if it is spectacular, but every day people are dying on our roads and we don't talk about it and we are not shocked, et cetera. But you have an accident with an AV and then it will take months in the media,
and we're not even very scientific about it, it's very emotional. So maybe it's more a psychological question than a philosophical one, but I would love to hear your view on that: why do we get so affected when a machine makes a mistake that harms people?
Speaker 1 (07:12)
Yes, so thank you so much for asking this question. I love this question. And you're right, there's a psychological component that, as a normative researcher, I won't be able to meaningfully respond to. There's a philosophical component, I think, that I can kind of flesh out. So I think there are a couple of different points, and they're all sort of loosely connected. The first is that since the beginning of science fiction, if you think of Asimov and even the ancient Greek tradition,
the idea was never that the machine, the automaton, was fallible. It was perfect. And what went wrong was that it overapplied this perfection to our fallible human morals. Like, you know, the Asimov laws work and they're perfectly applied by the machines. But the issue is that the human complexity of social life and morality makes it so that perfect application is a bad thing. So
because of the science fiction component, to link it back to the beginning, we have this expectation of perfection that comes, I think, from how we viewed technology when it was still a total dream, you know. So there's that side. And then I think that another side has to do with the difference between an engineering mindset and an end-user mindset.
So I love talking to engineers about this question. It comes up a lot and they're always like, why do we have to be perfect? Who's perfect? There's no such thing as perfect. And they're right, right? Because from their perspective, I think they're trying as hard as they can just to get comparable levels of performance with humans. That's already an incredible feat, to be able to design a vehicle system that can handle the dynamic driving task in a complex ODD (operational design domain). Wow, you know.
And so then they're like, well, what do you mean it has to be perfect with no accidents? And every time there's an accident, it's like the end of the public acceptability of the technology for sure. But I think what it says for me as a philosopher is that there's more to driving than just efficiency and performance and maybe even safety. Right. Like I think we have to understand that people are giving up something more than just driving from point A to point B with automation.
I think they're giving up a certain amount of agency as well, right? Maybe some preferences, some freedom. Some people just drive their cars to drive their cars, just to think about something. I mean, for ecological reasons this is kind of not trending right now, but in principle, people enjoy it, they find it pleasurable to drive. And it's a big expression of individual autonomy, especially maybe in the American lexicon. So I think the idea here is, if you're gonna give all that up,
it's not enough that it just performs as well as me, because then I should still be able to drive. It has to be that it gives me something more, something that compensates for this deeper kind of value loss, the opportunity cost of not being able to drive myself around anymore. Even though from a safety gain perspective, statistically it already makes a lot of sense, rationally, right? Yeah.
Speaker 2 (10:07)
Yeah, I see that. And there is also another aspect, which I recognize in this type of values that we put into the discussion around autonomous vehicles, but it goes beyond autonomous vehicles. It can also go in the direction of automation in general, robotization, having more and more automatic processes and AI replacing a lot of jobs. So the big word here is jobs.
And it's threatening jobs. A good example is the one of bus drivers. There are multiple reasons why public transportation is looking at automation, and I think they should look even more at the technology itself. But there is a shortage of bus drivers, there are some cost opportunities, some flexibility they can gain, et cetera, et cetera. So the list is very long, and I'm talking about it all the time. But if you speak to people outside
these bubbles of experts doing automation, there is kind of, how can I describe that? People feel almost sadness, I think, or maybe also fear, that you remove someone, that we come to the bus driver in the morning and say, you don't have to come to work today, your job is canceled. And there is a kind of dramatization there, where I don't think that's how it will happen; it will be gradual. So first,
I would like to hear your point on that. But for me it also goes to a next question, about hard jobs. There are jobs that existed before that don't exist anymore. I'm thinking of things like the people who used to light the street lamps individually, before electricity. When electricity came, all these people did not have that job anymore. So
I'm sure these discussions happened back then, but I think that's kind of how progress works: step by step, technological progress replaces certain jobs. And it's always kind of hard to make the transition. And maybe it's easy for me to say because I don't see directly how it will replace my own job, although I'm sure it will at some point as well. But why be so upset about it? Maybe it's a philosophical question, maybe it's more sociological this time.
Why be so upset about this idea that a machine takes over your job, if either the machine will do it better than you, or the job was hard in the first place?
Speaker 1 (12:26)
This is such an interesting question. Again, you prepared such amazing philosophical questions. So yeah, okay, there are so many different ways I could attack this, and I don't want to spend forever trying to flesh it out. But okay, the moral panic, the reaction, I think that is a psychological and sociological question. But you're right, I mean, there is this kind of progress-is-progress narrative where, yeah, some people are always going to get the short end of the stick with technological innovation.
And sometimes people are gonna get a little bruised, but eventually they kind of find their way. But I want to maybe pick out one little part of it. And this is the idea of whether it is right, like morally correct, to take those jobs
away from people, especially if automation can perform the job more safely or more efficiently, right? Because I think that's sort of the crux of the debate. Oftentimes, the arguments towards taking away someone's job are about this efficiency thing and the safety claim, especially in automotive. So I think maybe it's good to bring up a really seminal article in computer ethics, machine ethics, AI ethics, from back in the eighties, by a philosopher named
James Moor, and it's called "Are There Decisions Computers Should Never Make?", which is, I think, very much the crux of our discussion right now. And in it, he kind of debates under what circumstances we could delegate decision making to machines, exactly construed. And he makes this distinction between decision making under clear standards and fuzzy standards, where clear standards are like a game of chess: it's very clear and obvious, empirically, what answer is correct and what is incorrect.
And so statistically, you can very much look at that and say, okay, you're a good chess player because you won so many matches, and every time it's clear that you definitively won the match, right? Whereas with fuzzy standards, it's more like choosing a career, he says. There's more than one right option, right? Many things could be correct. And the parameters of the decision-making are not so clear. They're not universal and they're kind of open-textured.
And so he says that it's clear that if the machine performs better under clear standards, we should implement the machine, clearly. But when it comes to fuzzy standards, it's difficult to even measure performance. And this is kind of where we're sitting, right? Because maybe there's more to driving a bus than just getting everybody from A to B, right? Yeah. And I think,
The point that I'll make that's more philosophical here is that I think like what we try to do sometimes in tech, especially, you know, when we're trying to really build a robust system is to take a fuzzy situation and try to make it clear. So we're going to like take one value like safety, like physical safety and just maximize it. Right. That's the thing that matters most. It's the only thing that matters. And that's also what ethics requires. Right.
And everything else that could be fuzzy and interesting kind of falls by the wayside or is secondary to that principal aim, right? So I think there's this dream that we're able to fold all this fuzziness into some clear definition, and so much work in AI policy, AI development, and even AI ethics is kind of working on that. I think we'll come back to it when we talk about AI ethics principles and that kind of stuff, if we do. Whereas I think we have to accept that almost every
real-world deployment case of an AI technology is a fuzzy-standard situation. And because of that, what's right to do relates very much to the values you're trying to maximize, right? So if it is about economic efficiency, if it is about safety performance, well then yes, you are correct to do that. So then the onus is kind of on us as individuals and developers to really know
the values that we're trying to maximize and to be justifiable to the people, right? So instead of being right and saying, well, obviously, rationally, this is correct, we can say, okay, this is where I'm coming from, is this justifiable to you? And then we move it from the abstract moral sphere of ethics into something more political, more community-based, right? Can you provide an argument as to why you should replace this job with automation? And is it acceptable to the people who buy it?
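To make the contrast concrete, here is a minimal sketch, not something discussed in the episode, of the difference between maximizing one "clear" value and writing down an explicitly weighted trade-off. Every option, score, and weight below is invented; the only point is that the weights end up in one visible place where they can be stated, contested, and justified.

```python
# Illustrative only: single-value ("clear standard") optimization vs. an explicit
# multi-value trade-off ("fuzzy standard"). All names, scores, and weights are invented.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    safety: float          # 0..1, higher is safer
    accessibility: float   # 0..1, e.g. service coverage for riders without cars
    rider_autonomy: float  # 0..1, how much choice and agency riders keep


OPTIONS = [
    Option("fully automated fixed route", safety=0.95, accessibility=0.60, rider_autonomy=0.40),
    Option("automated shuttle plus human attendant", safety=0.90, accessibility=0.80, rider_autonomy=0.70),
    Option("status quo human-driven bus", safety=0.85, accessibility=0.75, rider_autonomy=0.80),
]


def clear_standard_choice(options):
    """Single-value maximization: only safety counts."""
    return max(options, key=lambda o: o.safety)


def fuzzy_standard_choice(options, weights):
    """Weighted trade-off: the weights are the values you commit to publicly."""
    def score(o):
        return (weights["safety"] * o.safety
                + weights["accessibility"] * o.accessibility
                + weights["rider_autonomy"] * o.rider_autonomy)
    return max(options, key=score)


if __name__ == "__main__":
    print("Safety only:", clear_standard_choice(OPTIONS).name)
    # Making the value weights explicit is the step that can be justified or contested.
    weights = {"safety": 0.5, "accessibility": 0.3, "rider_autonomy": 0.2}
    print("Explicit trade-off:", fuzzy_standard_choice(OPTIONS, weights).name)
```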
Speaker 2 (16:36)
Yeah, I love that. I love it because, when I started working on this topic of autonomous vehicles, I asked myself all these questions at the beginning and kind of sorted them out for myself, and I went for it. But I did not have the impression there was so much debate about it. It's like everybody has their own values on why it is meaningful to do this and push it. But sometimes when I discuss with my peers, I realize it's not
kind of the same, you know. For me, I'm really focusing on this flexibility aspect, of course, and cost. I know it's also a bit of a hard thing to say, but the wages of the drivers are inflexible in the way they are managed. And if we could shift the flexibility, not only to the way an AV operates, but to the way the human is helping out the entire service.
So for me, that's where I can see a kind of optimization. But I'm an engineer, so I talk a lot about optimization.
Speaker 1 (17:38)
So I think that what I see happening, with this question of what's the scope of moral responsibility for a public institution, let's say, is that it has everything to do with the power and scope of the technologies that we're developing. If we were just talking about rudimentary, you know, standard vehicles,
this wouldn't be part of it, right? But in every sector, not just automotive and transportation, all the major stakeholders are asking themselves: what is my responsibility here? Because the things that I'm leveraging now have far-reaching and also very unforeseeable impacts on stakeholders that I wouldn't even typically imagine to be part of the picture. And this is, I think, a consequence of the power of the tools that we're messing with right now.
My short answer is we have to take it seriously and do our best.
Speaker 2 (18:30)
Yeah, it's a great answer. And I like how you put it back to these values that I think should be mentioned, should be explained, should be made very transparent: what are we trying to achieve here, and seeing it as a tool and not a goal per se.
But staying in the AV space before moving to more AI and ethics and data, et cetera: many services being developed in the AV space are on-demand and door-to-door, something very similar to what Uber is doing, and we see Waymo being one of the biggest players.
But if you move to an on-demand, door-to-door service, you kind of go into these types of apps where you have to register, where you need some
security level, so you have to give a lot of your private data to the app, to the service like Uber, where they can check the identity of the drivers, but also the identity of the people riding. You do the same on Airbnb, and that's how you trust the system: people have been checked beforehand and it's safe.
But if there is this booking mindset, you lose anonymity after a while. And this thing where you can just buy a metro ticket and jump into the metro or the bus, typically in Paris, for instance, maybe at some point it will disappear. I don't know. I'm questioning a bit whether there is a risk that anonymity will disappear from our cities and from the public space. But then for me, it's almost not public anymore, you know, if we all have our
digital tags on our heads saying who we are. So are we getting there? What's your view? Or maybe I read too much science fiction there.
Speaker 1 (20:14)
Yes, no, I don't think you're reading too much science fiction, or if you are, I'm reading the same book. But yeah, I think in the public transport context, yes, we are shifting towards this no-anonymity, no-privacy, totally trackable environment. I mean, the actual transportation is part of it, but there's also the larger public surveillance question.
Let's not forget all of the data collection and all that stuff that goes on with your smart devices, the Internet of Things. So the technosphere, as you would call it in philosophy of technology, is thick. It is dense these days, and it shows no sign of stopping, because I think so many of the incentives when it comes to privacy overreach come from safety: either safety in terms of the performance of automation and other systems, or
safety in terms of public safety vis-a-vis other people in the environment. So this says a lot about the types of values that we prioritize.
The privacy versus convenience trade-off, I really have a hard time with. And it's especially cultural, because I think in France, for instance, where I live in Paris, it's less of a thing. You know, we have
ride-sharing apps and Uber, and we have Uber Eats and all of these different platforms so that we can order everything at our whim. We have Amazon too, but I think people still go and buy their vegetables someplace, still go for walks, right? Because it's a 15-minute city, everything's accessible. And so in that sense, the surveillance components aside, the trade-off is weaker, maybe, in European environments where it's easier to be a pedestrian, right?
Because I think that, you know, public transport has always been a petri dish for moral progress. I mean, think of Rosa Parks: where else was she going to be able to make that move, right? It wouldn't have been in the middle of a public square, right?
But no, I think empirically we're definitely moving away from that. And philosophically, yeah, it's one of those progress-is-progress changes, except for this trade-off between convenience and privacy. I really think that we should have
a bit more moral ambition there, you know. Not to call up the WALL-E example, but I feel like this is on the WALL-E long tail. Yeah. Do we really need it?
Speaker 2 (22:35)
Yeah, that's a good example. How much are we willing to lose, so to say, just for convenience reasons?
And I think that's exactly what it is about: which type of cities do we want for later?
What do we want? Is it all about convenience and safety? Life is more than just being convenient and safe.
So if you agree, I would like to spend a few more minutes on the topic of this tool that I'm developing called pulsur. It's a tool within Urban Innovate. And when I say I am developing it, that's a bit of an exaggeration: I'm collaborating with amazing people to make this tool happen. What it does is take data from
the social media space, from the surveys that already exist, or from direct feedback, all around public transportation or urban mobility in general, how people experience transportation in a city; collecting all this data, using my own IP to sort this data with the support of AI, and then profiling.
My goal is really to classify all of us into traveler profiles. So are we regular car users? Are we transit captives? Are we rarely mobile?
Are we transit potential, occasional users, et cetera? I have all these profiles defined; that was part of my PhD thesis. Back then everything was done manually, and now I thought, okay, with the coming of AI, I really want to put that together in a tool that could be useful for cities to better understand not only their riders, not only the people using public transit, but also those who aren't. So that's the big challenge I've set there.
And later you can do forecasts on revenue, on ridership. The potential of the tool, I think, is huge. And I'm selling myself here, and the tool. But thinking of AI, and it was after a talk from you that I watched recently, I realized that in the way I designed pulsur for now, I have my theoretical background,
I'm a scientist by training, so I want control. Am I going to miss something by doing that?
So I'd like to hear your view on that: this idea of how much we want to control the way we program algorithms around AI, where the risks are there, and what it tells about a product, the way it is designed.
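pulsur's actual pipeline isn't described in implementation terms in this conversation, so the following is only a toy sketch of the general idea: aggregated signals per respondent get mapped onto the traveler profiles named above, and the results are summed into shares an agency could look at. Every rule, threshold, and field name is a hypothetical stand-in, not pulsur's method.

```python
# Illustrative sketch only: a toy version of the kind of traveler-profiling
# pipeline described above. It is NOT pulsur's actual implementation; the
# feature names, rules, and thresholds are hypothetical stand-ins.

from dataclasses import dataclass

# Profile labels taken from the conversation; the definitions below are invented.
PROFILES = ["regular car user", "transit captive", "rarely mobile",
            "transit potential", "occasional transit user"]


@dataclass
class Respondent:
    """One person's aggregated signals from surveys, feedback, or social posts."""
    trips_per_week: int
    has_car_access: bool
    transit_sentiment: float  # -1 (negative) .. +1 (positive), e.g. from a text classifier


def classify(r: Respondent) -> str:
    """Map aggregated signals onto one traveler profile (toy rules)."""
    if r.trips_per_week <= 1:
        return "rarely mobile"
    if not r.has_car_access:
        return "transit captive"
    if r.transit_sentiment > 0.3:
        return "transit potential"
    if r.trips_per_week >= 8:
        return "regular car user"
    return "occasional transit user"


def profile_shares(respondents):
    """Aggregate individual classifications into shares an agency could act on."""
    counts = {p: 0 for p in PROFILES}
    for r in respondents:
        counts[classify(r)] += 1
    total = max(len(respondents), 1)
    return {p: c / total for p, c in counts.items()}


if __name__ == "__main__":
    sample = [
        Respondent(trips_per_week=10, has_car_access=True, transit_sentiment=-0.2),
        Respondent(trips_per_week=6, has_car_access=False, transit_sentiment=0.1),
        Respondent(trips_per_week=0, has_car_access=True, transit_sentiment=0.0),
        Respondent(trips_per_week=5, has_car_access=True, transit_sentiment=0.6),
    ]
    print(profile_shares(sample))
```

Even in this toy version, the classification step is where someone could be misclassified, which is exactly the assessment question raised below about how a system categorizes people.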
Speaker 1 (25:05)
Yeah, I think you hit the nail on the head with the transparency component. But I don't think there's any kind of hard and fast answer here.
I think, though, that there are some questions you can ask to kind of know where the boundary is and what's right for you. I think the first is to understand the safety criticality and the societal impact of your system. What I mean by that is you ask yourself questions like: what do people stand to lose as a result of, for instance, not using your tool once it's widely adopted, right?
What do they lose access to? What do they lose the opportunity to do? Is their agency restricted? Not just physical harm, but the whole package, right? And also, if they use it poorly, what happens? How do they suffer? Do they suffer financially, psychologically, physically, economically? Or if they're misclassified by the system, right?
I think that's always the biggest panic for us as AI ethicists when we do, let's say, ethics assessments of systems and these kinds of things: how are you categorizing people, when they stand to lose a lot as a result of your tool or system or service or product treating them correctly or incorrectly?
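These assessment questions lend themselves to a simple checklist. Purely as an illustration, and not any formal instrument mentioned in the episode, here is what recording them as structured data might look like; every field name and the crude triage rule are invented.

```python
# Illustrative only: the assessment questions above captured as structured data.
# The field names and the crude triage rule are invented, not a formal method.

from dataclasses import dataclass


@dataclass
class ImpactAssessment:
    system: str
    loses_access_if_excluded: str  # what people lose if the tool isn't available to them
    agency_restricted: bool        # is anyone's agency or choice narrowed?
    harm_if_misused: str           # financial, psychological, physical, ...
    harm_if_misclassified: str     # what happens when the system gets someone wrong

    def needs_deeper_review(self) -> bool:
        """Crude triage: flag anything with restricted agency or non-trivial misclassification harm."""
        return self.agency_restricted or bool(self.harm_if_misclassified.strip())


if __name__ == "__main__":
    assessment = ImpactAssessment(
        system="traveler-profiling tool",
        loses_access_if_excluded="tailored transit information and planning attention",
        agency_restricted=False,
        harm_if_misused="agencies misallocate service based on bad profiles",
        harm_if_misclassified="a rider's needs become invisible in planning decisions",
    )
    print("Deeper review needed:", assessment.needs_deeper_review())
```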
Speaker 2 (26:18)
It reminds me of your novel, actually, where there was something about getting a job, right? If you are discriminated against by AI, because of the way the AI was programmed, you don't get an opportunity for a job. It's huge. So is that what you're asking? Is it at that level?
Speaker 1 (26:33)
Yes, that's right. I think that's the first thing, and this also aligns with emerging legislation in the EU, this risk-based approach to understanding the level of risk of systems and then what acceptable risk looks like. So it's all kind of bundled up in there.
Like, are you using people's opinions in good faith, for instance, right? And as social scientists, sometimes there's this fine line between stated and revealed preferences, where revealed preferences are always way more sexy, because that's where the real gold is. And true enough, yes. But then, you know, especially if the tool is working with public institutions, you also have this kind of responsibility that you have less of in the private sector,
of upholding constitutional rights and principles. And so it makes it a little bit more of a constitutional question as well, how you handle data and what types of data categories you use. So I would say, like I said, there's no one right way to do this,
and you don't have to be morally perfect. I'm sure I'll make this point again and again: it's impossible with technology, right? You're always, yeah, you're always kind of justifying why you probably had to make some compromise that some of you are comfortable with but some of you, in your design team, are deeply concerned by. You know, but
the whole point is to be able to justify it to the public, to stakeholders. And if you look at what's happening in AI governance and best practice, that's very much the approach: stakeholder engagement, participation, making people aware of what you're doing and how it's going to impact them. That's the best that you can do.
Speaker 2 (28:08)
And so I think we are approaching the end, but I want to give you the opportunity to share, if you want, something maybe about your work, things you have been interested in lately, maybe even a positive message for the listeners. I mean, everything was very positive, it was not too dark, we did not go into a dystopian future. But about AI and ethics,
What should we read? What should we consider? Please tell us.
Speaker 1 (28:39)
Okay, yeah, so there are some key lessons, let's say, that I teach in my course, and I think they're pertinent here. The first one, I already spoiled it: there's no such thing as neutral technology. Even algorithms are normative, performance measures are normative. You're always going to traffic in certain conceptions of the good life and values in what you do. So the best you can do is to be aware of them, be able to communicate them, and think about them. And that requires
an interest in critical thinking, you know, and taking on your responsibility as a technologist. And second, maybe: sometimes we have a temptation, with all the power that we have in technology, to want to improve humanity with it. And I'll make the distinction between improving humanity and improving the services and products of humanity, right?
Speaker 2 (29:26)
I love that. I think I could see myself there.
Speaker 1 (29:29)
Well, but it's so natural, because you're like, if only people could collaborate more, be more rational, make more sense with each other, be less racist, and we could fix that, especially with some careful nudging or a great platform or some kind of program. As a philosopher, one of the liberal tradition as well, I strongly caution against that, only because, let's say, the normative
environment in which we live in society is complex, and there are so many rules that we abide by that we don't even know. They're not even explicit norms, they're just things floating around that people do, and they vary in such unique ways that you really have to tread lightly, because there are crocodiles, and you risk trampling on something really great because you're distracted by forcing something you think is really good. You know? So yeah, so that. And then maybe
one more: if you're a designer and you're overwhelmed by AI ethics principles and their applicability, I hear you, right? These are very abstract things and they seem to mean almost nothing at the engineering level. There are people who can help bridge that gap, and I think that's part of AI innovation also, to cultivate that space. Standards and best practice will also help with that. But in general, what this means is that,
again, the scope and the power of the technology is so big that now we have unanswerable questions that we have to solve for in engineering and technology, right? So you cannot expect anymore to find a perfect solution for your problem. You have to expect to provide really good justification. But this is a big paradigm shift from how we usually think in engineering. We're like, okay, I can, you know, I can find a way to optimize that.
Speaker 2 (31:20)
I find a solution.
Speaker 1 (31:21)
And it's super frustrating when you can't find one, but welcome to philosophy. You're playing with powerful tools, and so now there's no such thing as the right answer. It's all fuzzy standards.
Speaker 2 (31:29)
Wow, that's a perfect final word. I suppose, if companies are listening to us and they would like to know more about AI and ethics, where can they find you?
Speaker 1 (31:38)
I work a lot, thank you for the plug. Yes. So I have my own consulting firm, it's called Plathoria Consulting. I do stuff in automotive on, you know, AI strategy, AI ethics, policy, codes of conduct, that kind of stuff. But I also do a lot of work in standards development and certification programs, including actually conducting certifications. So that's also something interesting; I do that work with the IEEE.
And so it kind of varies, but the idea is just to get people to start thinking about these concepts and not being so afraid of them, and to work that into a workflow, whether that's from a technical perspective or from an organizational governance perspective as well, because you kind of need both, right?
Speaker 2 (32:22)
You also have a newsletter, which I also recommend subscribing to. You need to be a bit awake while reading it; it's not something you scroll through very quickly. But everything you shared today anyway was very high level. And thank you so much for educating us, if I may say, for bringing so much light on these difficult, tricky topics that as engineers, maybe, we tend to push a bit aside,
thinking they will resolve themselves. But no, I'm very happy we had this conversation. I'd love to have you again on Urban Innovate Talks. And yeah, let's continue the conversation soon.