In an uncertain world where AI technology is disconnecting us more and more each day, we are dreaming of a world where technology connects us. The premise of this podcast is simple: let's talk to people and imagine the most hopeful and idealistic futures for technologies that connect and do good in the world.
AI-tocracy (00:01.134)
Let me double check this over here. If there is anyone that has just jumped on the YouTube, just know that we are getting all set with all of the YouTube related things. Viewing the channel, going live. It says that it's upcoming. Oh.
Tom Williams (00:20.101)
Yeah, I see a tab that gives the percent uploading, and much like the Windows Update percent, it keeps going up and then down again.
AI-tocracy (00:23.96)
Yep. Yeah, I recently did an interview in which that was a little difficult. The upload is sometimes slow. Anyway, I'm just watching myself talk back to you right now with like a 10 second delay. So I'm gonna close that out. I'm gonna post a very quick LinkedIn post, even though I think I'm flooding the LinkedIn algorithm. And so I think it's not really circulating things that I post, but that's okay. LinkedIn is not the...
Tom Williams (00:43.729)
20.
AI-tocracy (00:57.068)
the Twitter that Twitter once was.
Tom Williams (01:00.239)
No.
AI-tocracy (01:02.37)
We are live on YouTube with Dr. Tom Williams to discuss his upcoming book, Degrees of Freedom.
I want to make sure I get the full title of your book on robotics and social justice.
AI-tocracy (01:32.208)
And with that...
AI-tocracy (01:47.599)
Strangely enough, it didn't seem to want to share the actual preview with it, but it's okay. It's posted. We're good. I've done my due diligence of posting the various live streams and links to it. So yeah, let's get started if you're good to go.
Tom Williams (01:55.281)
It's all good.
AI-tocracy (02:08.716)
Welcome to AI-tocracy, where power and AI intersect. And we try to break it down in ways that are relatable and hopefully accessible. And today we are on the line with Dr. Tom Williams. Tom, welcome to the show. Yeah, so this is a very special episode for me to do because Tom has been a personal supporter of mine over the years and was episode
Tom Williams (02:25.329)
Thank you for having me.
AI-tocracy (02:38.178)
three, I think, or in that first batch of episodes that we released for Radical AI almost five years ago, maybe five years ago, which is amazing to think how time has flown. And so this is also a good way for me to catch up with what Tom's been up to in the world. But yeah, I think this will be fun. And today, Tom, we are, of course, talking about your upcoming book, Degrees of Freedom on Robotics and Social Justice.
Tom Williams (02:40.507)
Mm-hmm.
Tom Williams (02:52.39)
Hehehe
AI-tocracy (03:06.188)
But before we get into the book itself, I'm gonna ask you the question I ask everyone, which is, is there something particularly inspirational going on in your life or in the world right now that you'd like to lift up?
Tom Williams (03:18.927)
Yeah, I don't know what types of answers you've been getting to this question, but the one I'll give is the thing that has been inspiring me is for the past year, I've been spending most of my free time doing improvisational theater. And just the ways that that community serves to lift each other up, serves to make sure that it's creating a safe space.
AI-tocracy (03:36.255)
cool.
Tom Williams (03:48.849)
For everyone, the ways that members of the community engage in mutual aid when folks are laid off from jobs, et cetera, and just the breadth of people who are working in different areas like education, healthcare, et cetera, that it's been just a joy to meet. I think the thing that's been inspiring me is participating in that type of community that's just entirely dedicated to the art of listening and understanding each other in conversation.
AI-tocracy (04:18.965)
That's amazing. So that does seem to tie into some of the topics in your book. So I'm just gonna start by: hey Tom, you have this book coming up. Tell us about it.
Tom Williams (04:26.608)
Yeah.
I do. Sure. Yeah. You see, AI-tocracy, right? You said it's at the intersection of power and AI, right? And that's basically what this book is about. It's the intersection of power and AI as it pertains to robotics. It's all about the ways that robots and robot designers in particular reinforce certain inequitable power structures in today's society. The ways that
that is grounded in the history of robotics and the specific sort of literary and engineering factors that led to the emergence of robotics and the ways that that then today helps to explain the specific types of power that roboticists are wielding, the default effects when they wield the power in the ways that they tend to, and the solutions we might have to try to subvert those dynamics.
AI-tocracy (05:29.207)
So robotics is a big topic. And I feel like it's transforming very quickly in how we think about it, what is covered under this umbrella. So before we go into the history of it, when you're talking about robotics, is there a particular definition you're operationalizing, or a way that you're considering it? Yeah.
Tom Williams (05:31.813)
Yeah, it is a big topic. Yep.
Tom Williams (05:42.873)
Mm-hmm. Yeah.
Tom Williams (05:51.365)
Hmm. So that's challenging. And we talk about the definition of robotics, right, at the beginning of like every class I teach, right? And it's hard, because robotics started out not as an engineering concept, but as a science fiction concept. And robotics has evolved over time as this sort of mutual interplay between science fiction and engineering. Engineers get into robotics because they see movies that excite them, right? And so the definition of a robot to some extent is
AI-tocracy (05:57.667)
Yeah.
AI-tocracy (06:16.097)
Yeah.
Tom Williams (06:20.847)
When you think of a robot from popular culture, what do you think of? In more practical terms, we typically use a definition like a physical thing in the world that takes actions to act on the world, which sometimes works and sometimes also accurately describes like thermostats and elevators and dishwashers. So it's hard, but they tend to be things that are in the world, as manmade artifacts that are acting on the world, whose appearance, and the way that we think about them, is the result of this history of the robot imaginary.
AI-tocracy (06:55.949)
Can you say more about the robot imaginary?
Tom Williams (06:59.855)
Yeah, so the robot imaginary is the sort of shared cultural understanding of what a robot is, right? And so this relates to the word robot, which came from Karel Čapek in his play, Rossum's Universal Robots. It comes from the work of Isaac Asimov, which built on Čapek, and which gave us the word robotics.
And it comes from the earlier literature that both of those built on, like L'Ève future, which was a French science fiction story that popularized the word android, and which was sort of a reactionary work that was raising concerns about the women's suffrage movement in France.
And earlier work like The Steam Man of the Prairies, which was maybe the first appearance of a robot in literature, and which used the mechanical man as a sort of a metaphor for black slave labor and what American white southern men were going to do with the end of slave labor in the United States. And so the imaginary
is sort of that history that has evolved around what we think of when we think about a robot, what it looks like, where it's used, etc., which then shapes how we continue to depict robots in science fiction, how we design robots in engineering labs, and so forth.
AI-tocracy (08:39.117)
When we're talking about the history of robots, it sounds like it starts somewhere. This idea, it sounds like there's maybe a technological part, where it's like we're building robots, and then a more story, narrative driven part. I know some folks draw the history of robotics back to, say, the idea of automatons in ancient Greece, I believe, and maybe in a more like biblical sense, like the Golem. Do you agree with that? Like, does that fit with the robot imaginary? Does that jibe?
Tom Williams (09:14.865)
Yeah, I think it's complicated. I think so. The sort of order I would say is, in ancient Greece, you've got the ideas of automata. And even at that time, they were being envisioned as a replacement for slave labor, a different type of slave labor than, you know, the antebellum United States, as that's how Aristotle wrote about them. But
it was still sort of a far-future concept. It wasn't viewed as like an engineering project. You then have early automata, which were engineering projects, but weren't designed for tasks, right? They were used as art pieces, or they were used in like cult rituals or for like stage magic or things like that. Then you have The Steam Man of the Prairies, which I view sort of as the origin of
the sort of modern robot imaginary, because of the ways that it interconnected those two, right? It was a character in science fiction, but one that was directly inspired by a patent a couple months earlier where people were trying to literally make a real engineering product. And then that inspired, through the early history of science fiction, about a hundred years later, you get Asimov
writing his short stories about robotics. And you have, yes, mid 20th century. And you've got Engelberger, who was inspired by Asimov's stories and worked to make the first factory robots. So you have sort of this arc, again: purely literary, to engineering curiosities, to this moment in time in the 1860s...
AI-tocracy (10:45.836)
And that's like mid-20th century, about, where we are?
Tom Williams (11:09.605)
where suddenly these are brought together in a generative way that then leads to, a century later, robots emerging in factories and appearing in science fiction in a very specific way.
AI-tocracy (11:22.924)
When you're saying, like, appearing in a specific way, is it more in the Terminator way, or in a different way?
Tom Williams (11:30.033)
Well, as human-like entities that are designed and envisioned as a replacement for slave labor in the United States that are being put into work environments where they are performing the types of tasks that low-income people and people of color are performing.
AI-tocracy (11:58.189)
Yeah, well, there's a lot going on right now, in June of 2025, about AI replacement and robot replacement of labor, or of jobs. And so that is a backdrop of this conversation a little bit, because there are justice issues baked into all of it. It looks like you wanted to say something to that.
Tom Williams (12:03.535)
Yeah. Uh-huh.
Tom Williams (12:09.648)
Yeah.
Tom Williams (12:14.063)
Mm-hmm.
Tom Williams (12:19.471)
Yeah, so I was just at ICRA, the big robotics conference, you know, probably the largest main academic robotics conference, a couple of weeks ago. And I was leading the AI ethics forum there, where I brought together people who are working on issues of surveillance and labor and so forth related to robotics. And at ICRA, they've got a giant expo hall, right, and all these robotics companies are showing up, showing off
their robots. And 90 % of it was humanoid robot, robot dog, humanoid robot, robot dog, right? And with the humanoid robots largely being depicted or envisioned in some type of manufacturing context, the robot dogs primarily being used in more like security or defense contexts, right?
And it was sort of disappointing, because the technology is very, quote, "cool," right, in terms of the engineering efforts needed to get it to work. But it's just seeing such a large percentage of the engineering community devoting their time and effort towards sort of continuing this history of robots automating or degrading labor in industrial contexts, or these new uses of robots that we're increasingly seeing of robots being tools in policing, both of which I think are potentially very harmful to society, and thus it's just depressing to see the greatest minds in robotics that are coming out of grad school primarily being funneled towards those domains.
AI-tocracy (14:15.466)
Yeah, well, so this is an aside, but I am curious about your take. What's the deal with robot dogs? Like, take Boston Dynamics, right? And this was maybe decades ago, at least one decade ago, where the big thing is that they came out with this robot dog that was doing robot dog things. Why? Why is that the animal we chose?
Tom Williams (14:38.095)
Well, I think that it is because the quadruped form, from an engineering perspective, makes a lot of sense, right? It doesn't have the balance issues that you're going to have on a two-legged, you know, bipedal robot, right? And then if you're looking at that, actually some of the early ones that Boston Dynamics were making were these larger, like...
pack mule, donkey type things, right? But the smaller form factor that you're going to be able to sell to a research lab, or to most like non-military domains, is a smaller quadruped, and, you know, dogs just sort of make sense in that domain. And so with the form factor, I don't think it's that they were just thinking, let's make a dog, and then made that. It's that they made the form factor, and then they decided to sort of, like, back-describe it as a dog and program it to do
dog-like things to help to solidify that metaphor. Which I think is another example of the interplay between engineering and science fiction: what we think we're designing on the technical side, how we describe that in a way that seems more magical or aligned with science fiction or future-looking, and then the way that feeds back in and shapes the actual engineering activities that are done.
AI-tocracy (15:56.576)
Yeah, well, I wanna work our way backwards to where we are right now, or maybe, I guess, forwards, since I think one thing that your book does is trace that history. But we're kind of talking around this idea of social justice and the language of social justice. So as a reminder, the book's title is Degrees of Freedom, on robotics and social justice. Talk to me about that second part, about robotics and social justice.
Tom Williams (16:11.494)
Mm-hmm.
Tom Williams (16:16.497)
That's it. Yep.
Tom Williams (16:20.849)
Yeah, so the main framework that I use in the book is the Matrix of Domination, which is this framework from Black Feminist Thought, which in part analyzes the ways that power is wielded in the service of oppression, right? And so the Matrix of Domination breaks power down into four categories. There's interpersonal power, which is the power we wield through everyday interaction. There is cultural power, which is, like, norms, biases, stereotypes that might be reinforced through interpersonal interactions. There's disciplinary power, which is about the sort of informal or formal rules of the game that manage oppression: the logics of race, issues of surveillance, et cetera. And then there's structural power, which is about institutions, codified laws, et cetera.
And one of the points I make in the book is that in robotics, for those of us who talk about power in a social sense, as opposed to, you know, an electrical sense, historically people have pretty much only talked about interpersonal power, right? There's so much work on, like, can robots persuade you to do X, right? For good or for ill. And my lab has done a ton of that type of work, right? But we don't talk about those other domains, right?
The ways that roboticists through robot design can reinforce stereotypes through the ways they design robots, the ways that roboticists can reinforce specific logics of race and of surveillance through the ways that their robots perceive people or the ways that those robots reason morally. And the ways that roboticists can reinforce institutions that are used to structure oppression through their decisions of, for example, like, do you collaborate or work with the police?
AI-tocracy (18:20.533)
When we're talking about, say, oppression, and I know in this book you talk a lot about robotics and race. When you're thinking about oppression in this historical context, maybe in this racial context, what are you talking about? What are you thinking?
Tom Williams (18:24.795)
Mm-hmm.
Tom Williams (18:28.741)
Yes.
Tom Williams (18:34.16)
Mm-hmm.
Yeah, so in the book I primarily talk about race and gender, which isn't to say that dimensions like class or sexuality aren't also important. And in fact, those are topics that I also talk about, but they receive a little bit less focus. Simply because of the ways that race and gender concerns have been sort of baked into robotics from
the beginning, due to its connection with slavery, with race in particular, but then in terms of the ways that those factors are especially salient in the visual design of robots, right? Robots are different from AI, especially in the ways that they are often embodied characters that are depicting physical people, that are embedded in the real world, often in human contexts of work,
in ways where they're put into the same roles as people, in ways that reinforce stereotypes in different ways. Two examples I'll give of that. One is, there was this robot, the EngKey robot. It was a South Korean English language classroom robot,
which had this sort of squat form, but then it had a tablet on it which had a white woman's face. The kids are told it's autonomous. It's not. It's tele-operated, primarily by Filipina women, whose facial expressions are mapped onto that white woman's face as they're tele-operating the robot.
Tom Williams (20:25.721)
Right. And so it's sort of a perfect storm of a lot of the ways that these issues interplay, right? The robot is being designed with this white woman's face in order to portray what was viewed in South Korea as the sort of correct English language instructor, due to the history of American imperialism in South Korea and in the Philippines, which then sort of reinforces that particular stereotype, while
also hiding the labor, and hiding the identity and the race, of those Filipina women who are controlling the robot, which happens in the Philippines. Yep. Yep.
AI-tocracy (21:06.984)
And those women are in the Philippines. Okay, so like kind of you're outsourcing that labor as well.
Tom Williams (21:13.489)
Which is not just reinforcing stereotypes, but reinforcing particular labor patterns. And so I think that's a great example of the ways that a lot of these systems, of race, gender, and class, are all interacting and intersecting in the design and the deployment of the robot, in terms of what's motivating it and then also in the downstream impact.
AI-tocracy (21:45.056)
Was there a second example? I know I cut you off.
Tom Williams (21:48.173)
Did I say there was a second? Sure. A second example: one of the examples I use in the introduction of the book, and that I think helped to push me on this path, was in 2016, when I was a PhD student, I spent a month in Bremen, Germany. And while I was there, DFKI, which is the German Research Center for Artificial Intelligence, had their like once-a-year open house day in Bremen,
where you could go and see all the cool robots they were working on, right? And they had this robot, AILA, the Artificial Intelligence Lightweight Android, which was designed for space exploration contexts. But it was in a woman's form. It had sort of female morphology, with waist-to-hip ratio, breasts, et cetera.
And then also feminine cues in terms of like the haircut, the look of the eyes that sort of mimicked makeup styles or the way that maybe you might see women's faces portrayed in animation. And on the one hand, this is maybe a little counter-stereotypic, right? It's maybe pushing back on the stereotype of space as a male-dominated industry. But on the other hand,
I had questions about, like, why make this robot so... it wasn't like hypersexualized, but it was sexualized, right? And so I asked the technician standing next to the robot why they chose to give the robot this form, why they chose to give the robot a chest. And his answer was, it was so we could fit more computer chips in. Which, yeah, obviously is like nonsense, right?
AI-tocracy (23:35.498)
Hmm.
Tom Williams (23:41.425)
That doesn't explain why you would constrain other parts of the robot's morphology, right? And when I included this anecdote in the book, one of the comments I got back from one of the early reviewers was, well, this seems a little bit unfair maybe, you know, that was just the technician, you should talk to one of the designers. And I was like, yeah, that's a good point. So I sent an email to one of the lead designers, and they said,
in their email back, and I'm going to try to quote from memory: I can tell you why we chose to use feminine and female morphological features in the design of the robot. In short, it was a cheap hack to attract attention. It was something along those lines. And then they talked about the ways that they intentionally used femininity to try to gain attention for the project, due to
AI-tocracy (24:26.536)
Mm. Mm-hmm.
Tom Williams (24:40.241)
problems they were having on the engineering side. And then he sent me their vision board, which showed all of the different clusters of different feminine robots that they had identified as part of their design process from science fiction, which were extremely hypersexualized. And among all these different clusters of highly hypersexualized robots, there was an annotated line, a line of shame,
where the things on the other side of it were like literal sex robots. So they had drawn this line and said, well, we aren't going to make it a literal sex robot, but we're going to draw very explicitly on all this history of science fiction, and the ways that specifically feminine robots have been presented in sexualized ways in science fiction, and use that to inform how they designed their robot to achieve sort of their, not sales, 'cause they weren't selling it, but sort of their sales goals, right?
And I think that's another example of the ways that, even today... often we talk about this, and I think people have a tendency to say, well, maybe a robot is designed as more feminine, like, implicitly, right? Because that's just intuitively how the designers were thinking about it, and they aren't really aware of the stereotypes that they're leaning into. But that's not always the case, right? Sometimes robots today are very explicitly and intentionally drawing on this history of stereotypes, because sex sells.
AI-tocracy (26:05.553)
Yeah. Do we see that as something that's increasing right now? Or, it sounds like this has always been baked into how people, like the capitalist machine, right, how people are selling this thing?
Tom Williams (26:15.097)
Yeah. I don't know that it's increasing. It's interesting, with the history of the robot imaginary, the way that the robot form has shifted over time, right? Where again, if you start with The Steam Man of the Prairies, right, you've got a robot depicted in the image of a black man. If you look at the work it inspired, you then have L'Ève future,
which shifted it to be in the image of a white woman, in order to tell stories about the rise of women's suffrage in France rather than stories about the end of slavery in the United States. You then have Karel Čapek's Rossum's Universal Robots, which kept this white humanoid form, in part because it was building on that history and in part because it was a play with human actors
in the Czech Republic, right? And then when you get to Asimov, Asimov's stories keep the mostly masculine humanoid robot form, and the robots continue to be coded outwardly more as white. The robots are doing things like quoting HMS Pinafore and other things that are, like, culturally white-coded. So while the robot is outwardly coded as white and masculine, and that's sort of portrayed as the domain of robotics, the domains that the robots are put into tend to be the domains associated with either, like, feminized domestic labor or
AI-tocracy (27:55.507)
Mm-hmm. Mm-hmm. Mm-hmm.
Yeah.
Tom Williams (28:11.237)
the labor of, like, working people of color. And today, if you look at humanoid robots... obviously none of this applies to, like, a robot designed to look like a bird, right, or like Boston Dynamics' dog. These visual characteristics don't really apply there. But if you look at humanoid robots today, and people have done analysis, like corpus analysis of these, they tend to either be portrayed as
Tom Williams (28:40.175)
white and masculine when they are in domains that are traditionally masculine, or as white and feminine when they are put into domains that are viewed as traditionally feminine.
AI-tocracy (28:52.029)
We've talked a bit about the sexism and sexualization of robots. To get back to race and specifically in this book, you talk about the history of slavery and how that interplays with robotics. And I imagine that's not necessarily one of the more controversial areas of the book, but I do imagine that you're gonna have some fun conversations when the book comes out around this topic. Could you talk about that?
Tom Williams (29:08.753)
Mm-hmm.
Tom Williams (29:16.198)
Yeah.
Yeah, I know. I mean, it continues to be, in ways that are very frustrating. An example I'll give from a couple of days ago: a historian and I have a paper that we had under review, which is talking about the historical issues that robotics students should be aware of, right? And it was drawing on the book, and it was drawing on his history dissertation.
And in it we talked about Asimov, right? So I'll give two examples. We talked about Asimov and we talked about Engelberger. With Asimov, we talked about the ways that he drew on this past history, and then the ways that Asimov's laws of robotics, some scholars have analyzed as, like, a computational slave code. Which is, I think, justified from a historical perspective as well, because Asimov's characters
in his stories referred to the laws of robotics as instilling slave complexes. And because Asimov credits the laws themselves not to himself, but to his editor, Campbell, who was a notorious apologist for slavery. There used to be a Campbell Science Fiction Award that has now been renamed because he was such a noted apologist and proponent, saying we should go back to the slave system.
So that was on Asimov. On Engelberger, the engineer whose work was inspired by Asimov, Engelberger has been quoted a number of times as being very clear about his motivations and the connection between robots and slavery in his eyes. He has a quote, for example, where he said, well, we are not allowed to have slaves or kick around black people anymore. So the robot serves that purpose. Extremely explicit, right? Very explicit.
AI-tocracy (31:11.273)
Whoa.
Tom Williams (31:14.853)
both in connecting slavery, race, and robotics, and in aligning his own perspective with the former slave owners, right? This is very similar to, Ruha Benjamin has this example in Race After Technology of this advertisement, or this popular science story, from the 1950s, which says, in 1865, Abe Lincoln freed the slaves, but in 1965, slavery will be here to stay. We'll all have personal slaves again, but we won't fight a civil war over that.
It was framing robotics, which I think has the same pattern, right? We'll all have slaves again, right? Again, positioning themselves with the slave owners, right? And Engelberger is not alone, right? There's a number of quotes like this from Engelberger where he makes this connection and talks about the connection with slavery. One of the reviewers we had, that we got this week on the paper, was like, this is unfair.
I interacted with Engelberger in the 1980s and he never said anything racist to me. Right? Which is so wild, right? And so these connections are very, like, historically documented, but there are still, even today, lots of roboticists, and especially, like, the older wave of roboticists, who are very hesitant to acknowledge these connections.
AI-tocracy (32:18.674)
Mm-hmm.
Tom Williams (32:42.993)
in part maybe due to hero worship, and because, like, a lot of people in robotics, and a lot of nerds, maybe grow up reading Asimov stories, and you get excited about them, and of course then it's uncomfortable to show some of these historical connections. So yeah.
AI-tocracy (32:50.556)
Mm. Mm-hmm.
AI-tocracy (33:05.276)
Well, it's what you mentioned as well towards the beginning of the interview, where you were like, yeah, these science fiction stories that a lot of engineers grow up with, that are the motivation for them doing the work in the first place: to have to question them within a dicey or completely, like, unjust, right, political context is, I imagine, really difficult for their identity.
Tom Williams (33:26.885)
Yeah. Although I will say, something that's been great to see is, like, the generational differences in engagement with these issues. Generally, not always, but generally, when I give talks on this, the graduate students are more or less all on board, right? And I think some of that is just a heightened social consciousness among, like, the new generation of
young people and roboticists, right? Yeah. And to some extent, it might also be that there's been less time in the formation of the engineering identity to make it difficult to reconsider.
AI-tocracy (33:57.161)
Kids these days.
AI-tocracy (34:13.892)
I want to stay on slavery for one more second, just because I think it's interesting to think through. When you're talking about slavery, is there also a part of this, because US history obviously is rife with, like, chattel slavery, and with the actual movement of bodies, not just the metaphor of slavery, like you're saying, like, black bodies being abused. And so is that connected, like, the actual physical?
Tom Williams (34:29.403)
Mm-hmm.
AI-tocracy (34:41.832)
Like chattel slavery. Are you bringing that line through?
Tom Williams (34:45.329)
Yes, yes. So I'm specifically talking about that context. I mean, this isn't to say that it's only a US thing, right? And sometimes people... I remember giving talks on this in the Netherlands, and Dutch scientists saying, this is just an American thing, like, race isn't a problem in the Netherlands, slavery is American history. Which is obviously very ahistorical, right?
But I think there's something very specific about the history in the United States of chattel slavery, and the, not to say that it was completely abolished, but the at least on-its-face abolition of slavery, that gave rise to these concerns that spawned early robotics in science fiction and in engineering.
AI-tocracy (35:41.384)
Mm-hmm.
Tom Williams (35:41.997)
It is specific to that particular form. Today, it's complicated. In the domains in which we're seeing it, often it's a little bit less literal. If we're seeing robots deployed into factories, for example, it is maybe more connected to exploitation of workers, as opposed to specifically slavery.
However, it's often still in racialized ways. So when Tesla's optimist robot came out, where they had the PR event a couple of months ago, in the aftermath, there was on Twitter, or X, there were a number of influencers who were, especially like South African influencers, who were posting about it, specifically envisioning it as a replacement for
black female domestic labor in their contexts. So it's still wrapped up in labor in a way that is often explicitly racialized. The other thing I'll say there, just in terms of, like, I will acknowledge that like, to some extent it is a global problem. To some extent things are different in different countries. So in Japan, for example, after Rossum's Universal Robots,
wildly different reactions in the United States versus in Japan. So in the United States, Rossum's Universal Robots came out that was depicting robots as workers and in semi-racialized senses, like there's an abolitionist character in RUR, et cetera. And in the United States, like engineers doubled down, right? You had like the Westinghouse Corporation had a robot that they released at one of their events that was designing the image of a black sharecropper.
And they invited people to like try to shoot, like, you know, shoot at it and to shoot, shoot an apple off of its head. like, you know, inviting people to inflict violence on this, this racialized robot form. Whereas in Japan, the engineers in Japan saw RUR and set, and explicitly said, we don't want that to be the future of robotics here. We want to design robots that are friends rather than tools. and so they created like the first like robot that they designed in the aftermath there.
Tom Williams (38:04.109)
It was intentionally designed not to perform tasks; it was designed for social interaction, right? And that shaped the evolution of humanoid robots in Japan. That's not to say that none of these patterns appear in Japan today, but there are some fundamentally different historical through lines in different locations. And so this specific history, originating in the history of slavery in the United States, is most relevant in the United States. So yeah,
I'll give you that caveat.
AI-tocracy (38:35.131)
Yeah, no, I think that's really helpful. I have a personal question for you, so take it where you want to go. I'm not making any assumptions about your identity, but I think I've heard you identify as a white man before, and I'm wondering what it was like for you to write this history, especially invoking slavery, and black slavery specifically.
Tom Williams (38:39.653)
Yeah, sure.
Tom Williams (39:00.175)
Yeah, I think it's important, and this is something that Patricia Hill Collins explicitly underscores in her work on Black Feminist Thought: the importance of white men participating in and helping to do this type of work. It's important for everybody to be doing it, to counteract the assumption
that otherwise sometimes forms, that engaging with DEI topics is the domain of people who have marginalized identities, an assumption which is then used to undercut the work and to undercut those scholars and so forth. And a lot of this writing was inspired by my participation in the 3C program out of Duke, which is the Cultural Competence in Computing
program, training for computer science educators in creating courses that focus on these topics. Based off of that, I taught a course at Mines on race and computing, and designing that course, teaching it, and having the conversations that came out of it helped push me in this direction. And I think that
the issues of race especially are important to write about right now, because there's almost nothing, almost nothing in robotics written on that. The human-robot interaction community is much better about talking about gender, and even there it has evolved a lot in the last 10 or 15 years. Fifteen years ago it was a lot of, we're going to look at gender because maybe men and women think about
robot gender differently, which tended towards biological essentialism in some problematic ways. Today it's gotten much better, and it's more like, how can we bring feminist perspectives into the design of robots, and so forth. But on race there's very little, other than the work of maybe Rob Sparrow, Christoph Bartneck, De'Aira Bryant. There are more people now who are working on issues of race,
Tom Williams (41:26.363)
but you can still count the number of people who are writing about these things on, you know, a hand and a half.
AI-tocracy (41:33.756)
Yeah, well, so I imagine that this book is for more of a general population, and maybe also academics in the HRI community. With that said, I'm wondering if you could speak to designers, or to folks, say, in Silicon Valley, which is where I am today, where there are billions of dollars going into the development of robots.
When you're talking to developers or people in industry who are designing robots, is there something you recommend for how they can operationalize this history?
Tom Williams (42:08.751)
Hmm, yeah. So for context, the book is an academic book, but my intention is for it to be highly accessible, because the people who I really want to read it are, yes, academics within my community, but really the next generation of robot designers who might be taking a tech ethics class, right? Maybe, hopefully, they'd be assigned a chapter out of it. And tech ethics students could mean
anyone who's a senior in college or enrolled in graduate school, right? Which means it needs to be more broadly accessible. I didn't want people to have to have any type of robotics knowledge to engage with the book. And so, I'm sorry, what was the question? Right, yeah, so in terms of recommendations then, I...
AI-tocracy (42:55.292)
It was around industry and developers.
Tom Williams (43:01.583)
Each of the main chapters of the book is basically broken up into three parts. I talk about the history and how we got to where we are today. Then I talk about considerations today, the ethical issues we have with robots in each of these domains of power, based off of that history. And then I talk about how we move forward. When I'm talking about how to move forward, I try to include a couple of
fundamentally different types of perspectives: ways that we might just revise how we're working, continuing within the same paradigm but shifting our perspective to try to avoid maybe the most egregious outcomes; more fundamentally, ways we might reimagine and re-envision our approach
by questioning some of the default assumptions while still remaining connected to the current approach; and then finally, approaches that more fundamentally rip up the status quo by looking back at the history we'd talked about and saying, what if we rethink that? If history had gone a different way, or if we weren't making those core assumptions, where might we go that's different?
And I try to emphasize that there's no silver bullet in most of these domains. So with robot appearance design, it's very complicated, right? On the one hand, you might say, well, the problem is that so many of our robots today are designed in the image of either white men in certain contexts or white women in other contexts; maybe we need more diverse robots.
But the problem there is, does that mean we're going to be designing more robots in the image of people of color? Which is then problematic when these robots, given the history of the robot imaginary, are being put into domains where they're acting as servants. So that's a problem. And to some extent it depends on who's designing it, which we see with AAVE in robots. I've been talking a lot about robot appearance,
Tom Williams (45:25.253)
but a lot of this is also about what a robot says: the words it uses, its prosody, et cetera. And there have been a number of people who have been using robots speaking in AAVE, African American Vernacular English, as a decolonialist design method, right, or as an anti-racist design method. However, last week
there was, I don't remember the details right now, but there was a news story about a tech company that was having its voice assistant speak in AAVE, and people on the internet were all up in arms, right? Because it was perceived as that company, and probably predominantly white and Asian engineers, pursuing that in a way that some people were interpreting more as blackface, right? As a digitized blackface. So it depends on
AI-tocracy (46:18.916)
Yeah.
Tom Williams (46:22.779)
not just what the design solution looks like, but who the designers are and what their motivation is for that approach. So in the book I try to emphasize the space of different solutions, but also what the problems with those solutions are, and also the need for non-technical solutions. It isn't just about what the robots look like; it's about who we are hiring to put on our robot design teams,
what domains we are putting robots into and why, and whether robots are really the right solution in the first place. Yeah.
AI-tocracy (46:57.946)
Yeah, well, that's a question that's been in my mind as we've been talking: do we just take as many identity markers, visual identity markers or, it sounds like, audio ones, out of the robot? I'm going to assume that we're going to keep having humanoid robots. Should we just, like, asexualize, agender, aracialize robots?
Tom Williams (47:07.302)
Mm-hmm.
Tom Williams (47:13.829)
Yeah.
Tom Williams (47:18.051)
Uh-huh. Well, that's challenging. Some people have proposed that, but the problem is that if you try to strip the identity characteristics away, everything else just flows in to fill the gaps, right? You can imagine: if you've got a voice assistant on your phone, the phone itself does not have identity characteristics, but when you hear the voice, you immediately racialize and gender it. People have tried to make agender voices; it's very difficult.
But even then, say you have a robot with a completely non-human-like form, and we're talking about interactive robots primarily here, so let's say it's still talking, but talking in a way that doesn't have clear characteristics of race, gender, class, or ethnicity. If it's put into a domain that is traditionally associated with a particular identity group, then people are still going to associate it
with that identity group, right? And so some of the alternatives there would be things like cross-stereotypic design, where you give it clear identity characteristics, but ones that run counter to the domain, hopefully in a way that's more sensitively designed than the space robotics example I had before. Or, my lab has been examining things like agender design, but
in a way where we're not necessarily trying to strip out identity characteristics, but instead using agender identity as a design characteristic. I had a student in my lab who is non-binary, he/they pronouns, and who was interested in exploring trans and non-binary perspectives on robots. So they interviewed
a number of trans and non-binary users. They showed them a robot design they'd created: a robot with a not clearly gendered form, speaking in a not strongly gendered voice, and describing itself, or themselves, as using they/them pronouns, right? And it presented as,
Tom Williams (49:38.425)
not genderless, but as a character with an agender identity. And they got the users' perspectives on this. It was really interesting seeing both the pros and the cons, right? The trans and non-binary users he talked to were really interested in it, were really excited about the design, and said they perceived it as an agender character. They saw it as helping to push back on stereotypes, or helping to expose people
to non-binary identities that they otherwise might not be exposed to. But on the other hand, they pointed out things like, yeah, rainbow capitalism. Again, if Google said they were coming out with the first agender robot, and that's what you hear about, you would have some concerns, right? And they also highlighted that it depends on what the space of robots is. If all the robots are like that, and it means that people are more likely to encounter an agender robot than an agender human,
AI-tocracy (50:20.858)
Yeah.
Tom Williams (50:36.505)
it might have the effect of dehumanizing real people. So for those users who talked about that, the takeaway was that this could be really cool if it's used within a space of other robot designs.
AI-tocracy (50:52.602)
Yeah, so I'm not entirely sure how to ask this question. I'm just thinking about this subject of social justice and robotics in the current political climate we're in. We don't need to go too hard into partisan politics one way or another, but I am wondering for you, as an author navigating getting this book out into the world, a book that does mention social justice, and I guess as an academic too,
Tom Williams (50:58.319)
huh.
Tom Williams (51:12.465)
Mm-hmm.
AI-tocracy (51:21.446)
How are you thinking about that?
Tom Williams (51:24.783)
Yeah, I mean, I think it's challenging, because we are at a moment in time when educators might be worried, for good reason, about increasing the amount of content of this type in their classes. In my robot ethics class, for example, there's a strong social justice component. And it tends to be a class that
is very polarizing, where three quarters of the class loves it and the other quarter does not, for one of two reasons: either because they're being asked to read and write extensively for the first time in their engineering curriculum, or because they're like, what is this woke BS? Right? And those students are giving faculty evaluations. They might be complaining to the school. Their parents might be, you know, they might be reporting you to, like,
Tom Williams (52:23.683)
Right Wing Watch or whatever, the professor watchlists, right? And so I think there are reasons right now why people are justifiably worried, maybe for their personal safety, about talking about these types of things. On the other hand, for those exact same reasons, we're at a moment in time when it's more important than ever to be talking about these issues and making sure that we are educating the next generation of engineers about
these issues: how what they're doing actually interacts with society, and how it is not just informed by but feeds back into and reinforces societal trends that are related in deep ways to everything that's going on politically right now. So it's a complicated time, but in some ways I am glad the book is coming at a time when
it isn't difficult for most students to see why these issues are relevant, when those same students are getting freaked out by DEI efforts being cut at universities, or being hidden, or what have you. And so I think it's really important at this moment in time.
AI-tocracy (53:44.209)
Yeah, as we move towards closing, unfortunately, I do want to ask if you want to highlight anything else that we haven't covered from the book. I know there's a lot in there, and I know you talk about surveillance a little bit, but is there any big take-home that you'd want to highlight?
Tom Williams (53:58.671)
Hmm, yeah. So the one thing we haven't talked a lot about is policing. This is the last main chapter of the book before the conclusion, but it's also in some ways what pushed the writing of the book in the first place. In 2020, in the aftermath of George Floyd, we're in lockdown
in the pandemic, and from my balcony in Denver I could see armored SWAT vehicles, you know, tanks going through the streets. People were getting tear-gassed and shot at by police, blocks from my apartment. And in that moment, I got really concerned with what was going on with robotics and the police.
We launched this No Justice, No Robots campaign to get roboticists to agree not to work with the police. That pushed a lot of my work in this direction, but it also later caused reflection on, well, what can we do beyond petition campaigns, right? To have practical and real impacts on the ways that roboticists are designing and deploying robots. And so,
at the end of the book I sort of come full circle with that, and I end by arguing strongly for an abolitionist perspective in robotics. So I guess that would be the thing that maybe we haven't talked about so much, but that I think is really important in the context of...
AI-tocracy (55:40.262)
Absolutely. And as we ask everyone, and again, this is blatantly stolen from Ezra Klein: is there a book, movie, piece of music, or other piece of media that you would recommend to someone?
Tom Williams (55:53.221)
Well, I'm going to give two answers. The first is that I think I'm contractually obligated to say my book is relevant: Degrees of Freedom, on robotics and social justice, from MIT Press. It'll be out late this fall. It's available for pre-order right now, but it's expensive, and the reason for that is that when it comes out, it'll also be available for free. Anyone will be able to read it for free
as a PDF from the MIT Press website. If you go to my website, which is tomwilliams.phd, there's a link to preorder, but there's also a link to sign up for updates. I have a newsletter where I'm sharing information about the book and its evolution, which I will also use to help get that link to the free PDF into as many students' hands as possible. The other book I would recommend, one that is not mine:
I've got lots of books here to choose from, although the way this is organized is that the top shelf is all books I have not read, and everything below that is fiction I have read, whereas all of the non-fiction I've read...
AI-tocracy (57:00.272)
So for listeners who are not seeing the video, Tom has a massive bookcase, bookcases perhaps, behind him right now. It's a very impressive library.
Tom Williams (57:08.345)
Yes, behind me, yeah. Thank you. But these are all fiction I've read; all the non-fiction I've read goes to work. So I don't have any non-fiction books to recommend with me, but that's a good excuse for me to highlight some amazing fiction. The book I will recommend,
which is my favorite book that I read last year, is Babel by R.F. Kuang, a fantasy novel set at Oxford. It is about the ways that academics participate in colonialism, the ways that the act of translation does violence through what it chooses to include and what it leaves out. And
personally, as somebody who works in robotics and AI and natural language understanding and generation, I found a lot of it, or my reading of it, tied to the ways that natural language technologies
participate in those dynamics, serve as tools of oppression, and reinforce colonial dynamics. So a fantastic book of fiction, but also really politically relevant right now.
AI-tocracy (58:30.048)
As we close, I'm going to give that one last plug: pick up Degrees of Freedom, on robotics and social justice, this fall. It's out through MIT Press and it will be free when it is out. So please do put it at least on your list, and subscribe to the update feed. Tom, one more time, what is that link, or where can they find it?
Tom Williams (58:43.087)
It will.
Tom Williams (58:54.929)
The easiest way is to go to tomwilliams.phd and there's a link that says get book updates, but you can find it directly with all of the previous posts at tomwilliamsphd.beehiv.com
AI-tocracy (59:12.282)
Wonderful, Tom, thank you so much for being on the show today and for the awesome scholarship that you're doing.
Tom Williams (59:18.203)
Thank you for having me.