AI for All Tomorrows

What's up with all of these chatbots impersonating our dead loved ones?

In this conversation, we talk with Tomasz Hollanek about the concept of digital immortality, exploring its implications in the context of grief, technology, and ethics. He highlights the paradox of seeking to escape our material existence through digital means while acknowledging the material impact of technology. The discussion delves into the emergence of grief bots, the ethical considerations surrounding data donation, and the regulatory landscape for AI technologies. Hollanek emphasizes the importance of consent and the potential risks associated with these technologies, particularly for vulnerable populations such as children.

In this conversation, Tomasz Hollanek discusses the intersection of technology, ethics, and cultural perspectives on digital immortality. He explores how technology can preserve legacies, the ethical implications of post-mortem avatars, and the importance of cultural sensitivity in designing digital afterlife technologies. The conversation also delves into the language used in this field, the trends in AI, and the varying attitudes towards death across different cultures. Hollanek emphasizes the need for ethical considerations and the potential for technology to enhance our understanding of mortality.

What is AI for All Tomorrows?

In an uncertain world where AI technology is disconnecting us more and more each day, we are dreaming of a world where technology connects us. The premise of this podcast is simple: let's talk to people and imagine the most hopeful and idealistic futures for technologies that connect and do good in the world.

AI-tocracy (01:11.917)
Welcome to AI-tocracy, AI-tocracy Live. And we're here with Tomasz Hollanek, who is a researcher over at the University of Cambridge. We're really excited to have Tomasz here and we're talking about digital immortality. What the heck is that? What do we do with it as people in this weird world of AI that we've found ourselves in? And really some adjacent topics, maybe about safety, maybe about impact.

But yeah, we're focused on digital immortality today. And Tomasz, as we've been starting: what's something inspirational to you that's going on, either in your professional or personal life or out in the world right now?

Tomasz Hollanek (01:53.656)
Thanks so much, Dylan. So when you asked me this question in preparation, I'd been thinking about something that I can mention that makes me feel hopeful, but at the same time something that is grounded in things that are happening in the world right now, but maybe not AI related, actually. And something that very quickly came to my mind was the work of a friend, actually, Darya Tsymbalyuk, who is a Ukrainian researcher, now an assistant professor at the University of Chicago. And her recent book,

titled Ecocide in Ukraine: The Environmental Cost of Russia's War. Now the reason why I thought this was something that I wanted to mention was that, well, Darya really brings up the question of the environmental costs of the war in Ukraine, the impact of the war on a more than human world.

And in a way she helps us expand our understanding of grief. For this book she interviewed many people in Ukraine who were talking to her about the experience of what it means to lose specific environments. So it's not just about the environmental question, but it is also about how we see ourselves in relation to nature. So this is not necessarily about decentering the very human and very direct impact of the war on the people of Ukraine, but actually it's also about reframing the conversation

to consider how we, ourselves, understand ourselves through our relationship with others and with the natural world. And, you know, it might not sound very relevant to the question of digital immortality, but it goes beyond this question of grief, expanding our idea of grief and how we might be grieving not only humans, not only family members, but these broader communities and also broader environments that might be dying for multiple reasons, including, in this case, as a result of the war in Ukraine.

Tomasz Hollanek (03:48.15)
The reason why I thought I would highlight this work now, and why I think it's inspirational, is just that it helps us reframe some of the conversations that very much focus on our ego, ourselves, the preservation of the self in the context of digital immortality, and how we might need to think about various questions, including technological questions, in relation to these broader impacts and broader...

sets of interdependencies that really constitute our life.

AI-tocracy (04:22.7)
Yeah, yeah, it's a set of questions that I've been thinking a lot about in terms of death and grief. This is a personal conversation for me because I've done some research on this topic of digital immortality and it means something to me, and we can get into that, but we're here to talk with you about this topic. And right now I've also fallen into this weird world of artificial general intelligence and that community that's very pro-AGI. And so I imagine

Tomasz Hollanek (04:48.91)
Yeah.

AI-tocracy (04:52.14)
when some listeners hear digital immortality, that might be where their mind goes. What do you mean? And what are we talking about here?

Tomasz Hollanek (05:00.238)
Hmm.

Well, of course there is something paradoxical inscribed in the concept of digital immortality. Very often, when you mention digital immortality, people start thinking about mind uploading and other ways in which we might want to, or could, preserve ourselves thanks to technological development. So if you mention this community of techno-optimists, techno-futurists, that's definitely something that comes up often, the idea that technology can help us preserve ourselves in new ways.

Of course, in that idea of digital immortality, the concept behind it really is that we can sort of

we can move past our material conditions, we can escape the body. There is something essential to us that we can preserve to escape the very material nature of our existence. And of course, this is why I think Darya's work is so interesting to put in conversation with this discussion of digital immortality, because digital immortality, and AI in general, is very much a material technology with a material impact on people

around the world. We can think of the extraction of minerals that are used to produce chips that enable these new forms of technologization. I would say that digital immortality in the broadest sense is this paradoxical relationship between our aspiration to dematerialize, to escape the material, and at the same time the very material nature of any technology.

Tomasz Hollanek (06:40.672)
But kind of bringing this back to earth in some sense: digital immortality is really about two things. On the one hand, there are two sets of companies that are emerging as part of the so-called digital afterlife industry. We see companies that offer individuals ways to preserve themselves, as I mentioned, so different ways in which you can record and store more and more data that would maybe then allow you to

pass on an avatar of yourself to future generations that would be as accurate as possible. So that's one dimension of this industry and the other dimension...

is really about resurrecting quote-unquote people that you lost. And of course in relation to both of these sort of segments of the industry there is the question of both preserving and potentially resurrecting again quote-unquote historical figures, famous people, so there is this question of legacy both on the individual level and how new technologies including AI can reshape the way we preserve ourselves or

reconnect with our loved ones, but also this question of how we reshape our legacy on a societal scale and what does that mean for memorialization, for instance.

AI-tocracy (08:01.879)
Can you give an example of what these chatbots are in particular? I imagine you're talking about chatbots, what we might call grief bots, but just to ground it a little bit more: what are these things, how are they made, and who might be using them?

Tomasz Hollanek (08:19.872)
Sure. So, thinking of the latter category, the category of people thinking about how they can reconnect with a loved one or someone they lost, the way this is done is basically, now, with the possibilities offered by generative AI and LLMs, large language models, it is relatively easy to mimic someone's style when they speak or write

using a pretty small sample of data. So basically all you need is some digital footprint of someone who was, or indeed still is, alive and you can use that footprint to essentially create a digital replica of that person. Some sort of replica that would be able to mimic, and sometimes very accurately mimic, a person's style, the way they speak, the way they think potentially.

And so there are specific products that already offer this. I wouldn't maybe want to advertise any specific ones, but I think it's also fair to say that some of the systems that we are very familiar with, whether you have encountered some of these grief bots or dead bots or not, services such as ChatGPT, they can also be used to an extent to generate something similar.
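
To make that concrete for readers who code: below is a minimal, purely illustrative sketch of the general technique described here, conditioning an LLM on a small sample of someone's writing so that replies mimic their style. It uses the standard OpenAI Python client; the model name, sample messages, and prompt format are all assumptions for illustration, not a depiction of any actual grief-bot product.

```python
# A hedged sketch: style mimicry from a small "digital footprint".
# Assumes the openai package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A tiny sample footprint; real services would draw on emails, chat logs, etc.
sample_messages = [
    "Oh love, don't fuss about the weather, just bring a cardigan.",
    "A cup of tea fixes half of life's problems, your grandad always said.",
]

# Seed the system prompt with examples so the model imitates tone and phrasing.
system_prompt = (
    "Role-play as a specific person, imitating their tone and phrasing. "
    "Examples of how they wrote:\n"
    + "\n".join(f"- {m}" for m in sample_messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not an endorsement
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I miss you. How have you been?"},
    ],
)
print(response.choices[0].message.content)
```

As the conversation goes on to note, the ethical weight here is not in the code, which is trivial, but in whose data seeds the prompt and whether they consented.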

AI-tocracy (09:22.807)
Sure.

AI-tocracy (09:40.174)
Yeah, no, I feel you on the advertisement portion whenever I talk about this topic. So for listeners: I have done research as part of Dr. Jed Brubaker's lab over at the University of Colorado Boulder, so very much on the academic side, on different ways that we can deploy chatbots and study chatbots around this question of digital immortality.

And for me, I come from a ministry background, so I have that reverend title, and I'm really interested in some of the bigger questions of ethics around these. But when I talk about this, I'm always scared to say, hey, this company over here, and everyone seems to want to know, where should I go to do this in the best way possible or the most ethical way possible? I don't wanna, I don't necessarily want a sponsorship

Tomasz Hollanek (10:34.615)
Yeah.

AI-tocracy (10:35.021)
from some of those companies, not for any reason, but just sponsorship is a sacred thing. I am curious, though, if we want to start wading into some of those ethical concerns, because I imagine when some people hear this digital immortality thing, it's weird, it's new, it's scary, perhaps. What's your take on this technology? Do you take an ethical stance one way or another?

Tomasz Hollanek (10:54.328)
Yeah.

Tomasz Hollanek (11:03.118)
Yeah, I mean, okay, so maybe the best way to start would be to say that to assess or consider the ethical implications of any technology, but in particular this technology, we really have to think about the technology in relational terms. So we're not talking about the impact on an individual, on their grieving process alone. We have to think about this person in relation to others and, in general, relationships between people. So...

One way to consider the risks that are associated with the development of these technologies would be to think about several stakeholder groups that we really need to think about to identify these risks. So this is a framework I developed with my colleague Katarzyna Nowaczyk-Basińska, also a researcher at the University of Cambridge,

someone that I have been collaborating with on this topic for over two years now. So when we are asked about the risks, we usually point to these three stakeholder groups. The first one is what we call data donors. So, the people whose data can be used to generate a dead bot or grief bot. Then we need to think about data recipients. So, the people who are in possession of the kind of data that can be used to resurrect, quote unquote, someone.

And then finally we also talk about data interactants. So those who are meant to interact with the end product. And of course sometimes these groups overlap, but not always.

And so it is through the relationship between these different groups that we can identify the biggest risks. When we think about the data donors more specifically, I mean, definitely the first thing that comes to mind is the potential violation of a dead person's dignity or a violation of their post-mortem privacy, which...

Tomasz Hollanek (12:55.726)
So this might mean, you know, when we're talking about dignity, we can imagine how some systems might consider how they can use a dead person's image to sell products to the relatives who are using a specific grief bot or dead bot.

Under some jurisdictions, this might be seen as problematic and potentially prohibited. But in many ways, because this industry is growing, it's new, we don't really know how specific laws would apply to these specific products. I'm thinking, for instance, of the Digital Services Act in the EU, and the AI Act in the EU as well, which do try to sort of steer the industry away from manipulation, away from the use of AI and platforms very broadly to manipulate users. But when it comes to these specific services, we don't really know how these laws would be interpreted. Not yet. So this is one... Yeah. Yeah?

AI-tocracy (13:52.749)
Before going into the other two categories, and we'll also definitely go into the EU versus US, but so listeners understand: this is increasingly becoming a million dollar or multimillion dollar industry. There is a lot of money in Silicon Valley and beyond pouring into this, which is very interesting in other ways. But for data donors, because I think that can be a term of art

Tomasz Hollanek (14:14.316)
Yeah.

AI-tocracy (14:17.95)
in our circles, can you just give a more grounded example of who a data donor would be in, say, building out a chatbot that is, quote unquote, resurrecting someone?

Tomasz Hollanek (14:27.406)
Yeah. Well, in some sense, we are all data donors. That is to say, all, or at least most of us at this point, have some form of digital footprint online. So traces of our activity online, traces of our activity on social media in particular.

In other words, the kind of data that could be used to create a chatbot that looks like us or behaves like us, speaks like us, writes like us. So in some sense, we could all be data donors.

So I could be a data donor, that is to say, someone who willingly donates their data. The reason why we use these terms, donor and recipient, is of course to kind of gesture towards the idea of organ donation and the fact that consent is central to using a person's organs ethically and responsibly. And so the use of these terms, which might sound vague to some people, is actually very purposeful. It is because we want to center the idea of consent, the consent of anyone whose data might be used in this context, which sometimes is violated. And of course,

There are researchers, academics, philosophers, this is Luciano Floridi and Carl Öhman for instance, who argue that a person's data, or what they refer to as the informational body or digital remains, should never be used for commercial purposes without the person's explicit consent. I find that idea a little problematic because...

Tomasz Hollanek (16:18.952)
You know, relationships are complex, and we maybe don't want companies to try to regulate our intimate relationships with our family members, some of whom might no longer be here. But definitely the question of

making whoever is in possession of the kind of data that can be used to create a chatbot of someone aware of the fact that the consent of that person is important is, I think, something that companies could be trying to point to when they guide you through the process of creating a dead bot.

AI-tocracy (17:00.502)
So that's data donation, and it sounds like issues of consent there. There were two other areas of risk that you wanted to go into before I ruthlessly cut you off to specify there. So what are those other two areas?

Tomasz Hollanek (17:09.004)
Hmm. Sharp. No, no, no, that's good.

Tomasz Hollanek (17:16.142)
So data recipients are, as I mentioned, the people who are in possession of the kind of data that can be used to resurrect someone or recreate them with AI. And very often,

These people are also the same. So they are also meant to interact with the end product, but not always. But when we're thinking about those who are meant to interact with the end product, so service interactants or what you would normally refer to as end users of these products, of course, something that has been highlighted multiple times is the impact on the psychological wellbeing of these users. So we can think about the potential prolongation

of grief when these services are used by someone who is grieving, but also the question of manipulation, something I already mentioned in reference to data donors. So definitely the potential for manipulation is huge. We're talking about, potentially, people in a very vulnerable state when it comes to grief.

Now, really our concerns are not only related to people who are grieving, and actually this is why I usually refer to dead bots, not grief bots, because we could be talking about someone who has, you could say, moved on. But still, the potential for manipulation is huge just because you're using an image of someone who was important in your life, potentially to sway you one way or the other, maybe to buy something, but maybe to vote for someone. So there is definitely that question.

And finally, when we're talking about this group of service interactants and vulnerabilities, we have to think more specifically about what kinds of groups we might need to prevent from interacting with these systems. So something that we have highlighted in our work with Kasia is definitely children. It's really fascinating to see how something that we have been warning against for a while now, how we don't really know what happens when children

Tomasz Hollanek (19:28.976)
start using these services and we were sort of highlighting this specific case study as a potentially very negative scenario and we now see it playing out in real life. So we now see services that actually use this example. If you're a parent and you're terminally ill, why not preserve yourself for your child so that they can interact with you in this new way after you pass away while they're still growing up?

And you know, this is again building on people's vulnerability, in this case, someone who is dying and wants to do something right for their kids or whoever else is left behind. But in this case, we're talking about children, who have this very special relationship with parents. And so...

The idea that you would be using this with children I find particularly problematic because we don't have any studies that would confirm that the use of these services might be beneficial to children, right? And as a researcher, you know that it would be very difficult to get that kind of study approved to actually show that yes, for some groups under certain conditions,

the use of such services could be beneficial. So our point here with Kasia really is that until we know for sure that there are some material and identifiable benefits, children, and...

but also other vulnerable groups should be prevented from using these services. There should be some restrictions. And other groups that we can very easily think of when it comes to this potential for misconduct and bigger risks, like people with Alzheimer's and other forms of dementia, and how we might need extra protections to ensure that specific...

Tomasz Hollanek (21:25.282)
groups of users with specific types of vulnerabilities are not negatively impacted by the growth of this industry.

AI-tocracy (21:34.805)
Yeah, you mentioned regulation and maybe this is a good time to talk about that a little bit, which is an area that I'm very curious about from your perspective as someone who is working and researching in the EU. How would you, maybe this is a big question, but how would you describe the landscape of regulation for some of this technology that we've talked about?

Tomasz Hollanek (21:48.27)
Hmm.

AI-tocracy (22:01.394)
And again, I have some thoughts and maybe more of a US perspective that I'm happy to share, but I am curious from your perspective where we're at with regulation, because that seems like a real ethical concern, say for children, the use of this technology by children that you're mentioning.

Tomasz Hollanek (22:01.774)
Mmm.

Tomasz Hollanek (22:13.644)
Sure.

Tomasz Hollanek (22:17.454)
So the landscape in the EU is interesting and it's evolving. The EU is probably the first jurisdiction in which we have seen this kind of regulation, very specific and at the same time very broad in its scope, targeting AI: the AI Act, which

is now almost in full effect. And parts of it have been introduced in August last year. So the AI Act is structured around the idea of this risk categorization. So for different AI systems posing different levels of risk, there are different requirements. So we can start with prohibited risk. So there are certain types of AI systems that according to the regulators in the EU should be

prohibited under any circumstances. So this for instance might include real-time facial recognition technologies and there are big differences between different jurisdictions when it comes to this specific use case. For instance in the UK many of these technologies are used by the police and they're still allowed while in the EU they might be prohibited.

Another example of a prohibited technology is the use of AI for social scoring. And another is actually the use of AI to manipulate people.

But this is what I was trying to gesture towards earlier where, you know, it's great that the EU made it clear that manipulation is not something desirable. Indeed, it should be something prohibited. But when it comes to specifically outlining what they mean by manipulation, this is where it gets a little trickier. And, I mean, I hadn't really prepared to talk about the AI Act, but there is a recital that

Tomasz Hollanek (24:22.348)
makes an attempt at making this specific point about prohibited systems more specific, but actually is making it even more vague when it comes to the kinds of use cases that we're discussing right now. So the use of AI to, for instance, help people cope with grief. Because the EU makes this...

So on the one hand manipulation is prohibited, but on the other hand, if certain procedures that involve consent are included, the use of AI is allowed. So basically the prohibition should not apply to systems that might be used for health or mental health reasons. So this is where you see this huge potential loophole, right? For companies to use, to say, well, I mean, we asked this user for consent, they consented.

They know what they were signing up for, in theory. And so here you go, there is this amazing system that can help people cope with grief. And indeed, this is why, when we're thinking about improving regulation, what we should be talking about is actually this question of consent, how consent is elicited by companies and how it's...

how it's affirmed, that's the area that we should be talking more about. And this applies not only to grief bots or dead bots, but actually all systems, AI systems that are built on conversations with users where the AI system itself might represent itself to the human user as sort of like an anthropomorphic entity with traits that might elicit trust from the user.

AI-tocracy (26:04.49)
Yeah, no, that's helpful, and I appreciate the perspective on the AI Act specifically. In the US context, it's very much a free-for-all now in terms of regulation around AI. I think the regulation or legislation that this would most come under is regulation around deepfakes and the potential harm of deepfakes. But even that, for this particular topic, I think is very much in process, very early on in its cycle.

Tomasz Hollanek (26:14.733)
Yeah.

Tomasz Hollanek (26:24.686)
Hmm.

AI-tocracy (26:33.598)
But I did get this question the other week where someone asked me, okay, so are we just talking about deep fakes? And I am curious from your perspective, is that the conversation or is there something else going on here when we're talking about digital immortality and resurrection and grief? Is it a different, should we think about it differently as everyday people, as researchers, as people who are trying to develop regulation?

Tomasz Hollanek (26:57.837)
Yeah.

Tomasz Hollanek (27:01.314)
Well, yeah, that's very interesting. So maybe I'll start by saying that when we talk about deepfakes in general and issues related to deepfakes, the reason why we are potentially put on alert is that we fear that these systems could convince someone that...

they're interacting with something that they're not actually interacting with. So for instance, it could be that you see some news story or a video of something that could have happened, but actually didn't, and you start believing in something while it never happened. And when it comes to deepfakes based on specific people, of course, the worry is that whoever is interacting with the AI system might,

might build this misconception of what it is that they're interacting with. And by the way, this is why the EU AI Act makes it very clear that...

actually all systems should ensure, or rather providers of all systems should ensure, that the users of the systems are aware of the fact that they're interacting with an AI system, unless it's clear from the nature of the interaction itself, which is an interesting element in that law. But anyway, in response: no, we're not only talking about deepfakes. The reason why is that the worry is not only that people might start thinking that they're actually talking

with their grandmother. We know from some early user studies that people are not necessarily seeking accuracy in these systems, right? But they still might have a very emotional reaction to something that sort of fills that void that someone left behind after they passed away. So this is to say that the manipulative potential of these systems is not related to their accuracy or how convincing they are in mimicking

Tomasz Hollanek (29:00.208)
someone. And this applies also to the question of dignity, right? It might be that your grandma's selling you, I don't know, spaghetti through Uber Eats, right? It might not even be an accurate depiction of your grandma, but still, the use of your grandma's data or image, to basically use their image as a puppet for corporate interests, is a problem anyway, whether it is convincingly

looking like your grandma or not. I would say this is where some questions overlap with deepfakes, and indeed some mechanisms for protecting users, and for protecting those whose data is used to create a deepfake or to create a dead bot, might overlap, and our thinking might overlap. But in the end I think that actually we're talking about two different problems.

AI-tocracy (29:56.894)
Yeah, one thing I've been thinking a lot about related to what you just said is data donation. It's around AI model development. So say you accept the terms of service. You're dying. You accept the terms of service for your data to be used. You might not understand those terms of service. It might just be the little box in the corner. So you don't maybe fully consent to everything that's going to happen with your data. Your family then also accepts the terms of service to then use that

Tomasz Hollanek (30:11.448)
Yeah.

Tomasz Hollanek (30:19.171)
Yeah.

AI-tocracy (30:26.481)
technology, again tacitly saying, I consent to this. But I imagine that to make these dead bots better within a company, or maybe with public, like open source, I'm not sure how that works, this data is then used like many other models, like large language models, like ChatGPT, right? It's just like you're constantly trying to tune that to be better. How is that functioning out in the wild right now

Tomasz Hollanek (30:29.239)
Yeah.

Tomasz Hollanek (30:47.032)
Yeah.

AI-tocracy (30:55.85)
with companies? Is it fair to say that there's a challenge or an ethical concern with that data being used to train models?

Tomasz Hollanek (31:04.846)
Yes, for sure. And this is why I think the work that you have been doing with Jed, and Jed's work, is really crucial to kind of tackling that dimension. I mean, you would probably be better positioned to summarize this work, but in a nutshell, what I find inspiring about the work that you've been doing there is really making central to the design process itself the question of what

happens after users die. Sometimes companies fail to do this in an ethical way not because they have bad intentions, but just because this was not something that they had on their radar. So of course we have seen amazing

interventions at Meta, so Facebook, but also Google, where there has been this thinking about: how do we make sure that people get some sort of say in what happens with their data after they pass away? Who is it that can manage that data after they die? But yeah, with these services, I think there is... So I was asked a few months ago, and,

yeah, I'll try to be vague: I was approached by a foundation that is managing the legacy of a Holocaust survivor, who is still alive, by the way,

and who is now a little over 100 years old. And the foundation and the Holocaust survivor are both thinking about how they could preserve this person's legacy in new formats. And they are also considering the use of AI and post-mortem avatars as a potential future direction.

Tomasz Hollanek (33:00.982)
And we were discussing how they can ensure that someone who doesn't really understand how the technology works, and most importantly, someone who doesn't understand how the technology might work in the future, meaningfully consents to this process. And something I would suggest we should all be thinking about, and trying to write down, you know, to make sure that our relatives are aware of our wishes, is thinking not so much about what kinds of technologies we're fine with or not, because they might change, but

what contexts we're fine with and what contexts we're not fine with. So for instance, specifying that in this specific case, thinking about someone who wants to preserve their testimony for future generations, maybe specifying that the use of their data is only fine when it leads to a production of something that is...

that is displayed in a museum context, for instance, right? Or specifying that the use of their data to create an avatar of them is only fine if it's 100% certain that the avatar would not produce any kind of statement that the actual person would not be okay with. Which right now, from a technical point of view, would be impossible when we're

talking about the sort of mainstream idea of what, you know, a dead bot or grief bot is. Or rather, it's very hard to say that there's a 0% chance of this happening. So that's why the use of post-mortem avatars, AI-driven post-mortem avatars, when it comes to Holocaust survivors is very tricky, although as you know there are some really great

experiments in this space as well. So yeah, I hope that kind of responds to your question.
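
For the technically minded, one way to picture the context-based consent Hollanek describes is as a machine-readable record attached to a person's data. The sketch below is purely hypothetical, not drawn from any real product or standard; the class name, fields, and example contexts are all invented for illustration.

```python
# A purely hypothetical sketch of a machine-readable, context-based consent
# record, following the idea above: specify the contexts you are fine with,
# rather than specific technologies, which may change.
from dataclasses import dataclass, field

@dataclass
class PostMortemConsent:
    donor_name: str
    # Contexts the data donor has explicitly approved, e.g. "museum_display".
    allowed_contexts: set[str] = field(default_factory=set)
    # Contexts explicitly ruled out, e.g. "advertising", "political_messaging".
    forbidden_contexts: set[str] = field(default_factory=set)

    def permits(self, context: str) -> bool:
        """Deny by default: a use is allowed only if explicitly approved."""
        return (context in self.allowed_contexts
                and context not in self.forbidden_contexts)

consent = PostMortemConsent(
    donor_name="example donor",
    allowed_contexts={"museum_display"},
    forbidden_contexts={"advertising", "political_messaging"},
)
print(consent.permits("museum_display"))  # True
print(consent.permits("advertising"))     # False
```

The deny-by-default check mirrors the point made above: a use is fine only in contexts the donor explicitly approved, and something like advertising stays off-limits regardless of how convincing the avatar is.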

AI-tocracy (34:54.02)
Yeah, it sounds like there are some smaller, say, startups that are increasingly being founded and funded to do this work, or to develop this technology, I should say. It also sounds like there are some bigger players that are getting involved in funding this work. And this is something I've been trying to make sense of a little bit. Do you think about those differently?

Tomasz Hollanek (35:01.55)
Hmm.

Tomasz Hollanek (35:07.971)
Yeah.

Tomasz Hollanek (35:17.997)
Yeah.

AI-tocracy (35:20.761)
What do you think about some of those bigger, you know, top four companies getting involved in either funding or being interested in this work in the first place?

Tomasz Hollanek (35:21.453)
Yeah.

Tomasz Hollanek (35:32.716)
So paradoxically, I actually think that bigger platforms might be safer actors here, because they already are under scrutiny. We pay attention to what these companies are doing with our data. We pay attention to...

how they might already be manipulating different groups of users when it comes to, you know, like news consumption, etc. I actually think that in some sense the likelihood of them doing something wrong is smaller.

it's less likely that they would violate these norms, at least this is my belief. And the other aspect that I think we need to point to when we're discussing the difference between smaller actors, startups, and bigger companies is the question of platform temporality. This is something that Tamara Kneese articulates really well in her book, Death Glitch,

which is, you know, these companies come and go, so the smaller ones appear, they promise you immortality, and then two years later they disappear because their funding model doesn't work out in the real world. So in some sense, if our goal is to find new ways of preserving our legacies, and there are many...

potential positive scenarios for that, I actually think that smaller actors might be, well, they might not be the best positioned to do this work well when preservation is the objective.

AI-tocracy (37:11.814)
Yeah, one quick follow-up. Why do you think that large companies are getting involved in this technology in the first place? For me, it's easier, if I weren't familiar with the space, to think to myself, okay, startup companies, they can move quickly, there is a quote unquote quick buck to make here. But for these larger, yeah, for these larger actors, like,

Tomasz Hollanek (37:34.636)
Yeah.

AI-tocracy (37:40.025)
What are they after? Is it just, this is an interesting topic? Or is it, do you think that it is more like product focused where they want to build out dead bots for themselves, like for their consumers and users?

Tomasz Hollanek (37:53.134)
I think it's fair to say that these companies have done some thinking. We have evidence of that thinking, like they are thinking of what they could be doing in this space, thinking about dead bots, grief bots, but digital immortality more broadly as well. At the same time, they have been extremely cautious when it comes to this space. And I think that this is precisely because...

bigger companies are aware of the fact that this is a very delicate area. You know, Kasia, my colleague, always says that this is an ethical minefield, and I think that's a very good term to use to describe digital immortality or the digital afterlife industry. There are so many potential issues that bigger companies might be a little less likely to...

might be less willing to experiment in this area basically.

AI-tocracy (38:49.969)
They're risk averse to this particular area.

Tomasz Hollanek (38:52.982)
That's what we've seen so far, which does not mean that this trend will necessarily continue. And the reason why I believe it might not continue is that, of course, companies are also looking for new streams of revenue and new ways to mine the data, to repurpose it. And this is something you mentioned earlier. I mean, it could be for training models. That's one potential avenue. But also, you know, having

more, well, potentially dead users on platforms might in some cases also

generate more engagement from living users. So there are different reasons for bigger companies to consider how our digital footprint can be used post-mortem. And there is one other aspect to it, which is they already have the data. Smaller companies don't have the data they need.

They need to get the data from us directly, which means that in some sense we are more in control of what is happening. We might be better positioned to curate the data. At the same time, platform temporality that I mentioned earlier makes it also a risky space, especially if your goal is to preserve yourself or your loved one for eternity.

AI-tocracy (40:16.869)
Yeah, I have follow-ups there, but I wanna pivot actually to a bigger question that I've been having throughout this conversation, around some of the language that our little AI and death community uses. And I think it's also contested right now. You mentioned, right, the difference between dead bots and grief bots, and there's, we won't get into the inside baseball of all of that, but

Tomasz Hollanek (40:25.367)
Hmm.

AI-tocracy (40:41.635)
you use the word resurrection a lot, and if folks couldn't see on the audio version of the podcast, you know, there were some air quotes around that term. And then we also have the title and topic of this episode, which is digital immortality, which has different connotations. Sticking with the resurrection thing, because again, as a minister, I'm like, interesting that we're using this like

Tomasz Hollanek (40:48.344)
Yeah.

Tomasz Hollanek (41:00.908)
Mmm.

AI-tocracy (41:04.727)
Jesusy connotation in some ways, in a lot of cultures, not all cultures obviously. But for me, as like a historically Christian minister, I'm like, this is interesting, it invokes certain things in a Western context and maybe around the world. That seems to be intentional, that we're trying to invoke maybe the emotional idea of resurrection or tap into some greater cultural narrative. And I'm wondering what you think about the language that you use, maybe using resurrection as

Tomasz Hollanek (41:27.278)
Mmm.

AI-tocracy (41:33.991)
a starting point.

Tomasz Hollanek (41:36.206)
That's a very good question, and probably my answer would be: I should not have used that term. But there is a reason why, and I will explain in a second. Actually, what I usually refer to in academic work is simulation or recreation,

Tomasz Hollanek (41:56.93)
which have slightly different connotations, basically they're not necessarily religious connotations. But when it comes to resurrection, I think the reason why the use of this term is appropriate and...

totally worth questioning, is that these technologies are not something new. Or rather, the imaginaries of what this specific technology might offer us should be mapped onto a longer history of thinking about how we might overcome death. This is something that the philosopher Stephen Cave, who also happens to be my boss, who is the director of the Institute for Technology and Humanity at Cambridge,

AI-tocracy (42:35.408)
Yeah.

Tomasz Hollanek (42:41.392)
writes about in his book, tracing these very consistent types of narratives about overcoming death, and how some prehistories of the sort of technological immortalization that we are seeing now could be traced back to, you know,

Greek mythology but also other cultural traditions. So when we're talking about digital immortality, we could be thinking about resurrection, quite literally Jesus Christ and that idea, but also other myths, other mythologies, other sets of beliefs that kind of tap into the same

idea of what immortality could entail, in this case, yeah, bringing someone back to life. This is quite different from

making someone live forever, right? So what Stephen does very well is he's pointing to, for instance, the idea of the elixir, again a very old idea of how we might overcome death, how we basically could never die, and how that finds its reflection in contemporary attempts to prolong life, in different strategies, technological, biological, medical, that make it possible for us to

live forever, or at least much, much longer.

AI-tocracy (44:13.157)
Yeah, yeah. One of the reasons why I love thinking about and studying this topic, and working with Jed Brubaker over at CU Boulder on this topic and hearing his perspectives, is that it seems to me, as someone who's been in the AI ethics field for, what, I guess eight years overall and seen it change, it seems to get at the heart of this new fascination in the public consciousness with AI. Because we've seen over the past, what,

100 years this technology story of AI, and the dream of having these AI robots that replace humanity and transform how we can live forever. That kind of narrative has been around longer, but with AI tech it has been around slightly shorter. There seems to be this hope,

Tomasz Hollanek (44:51.566)
Hmm.

AI-tocracy (45:04.473)
or again, with this artificial general intelligence community, that we're going to recreate humanity in some way and we're gonna replace who we are, and all these things that I think a lot of people have heard and that are getting all the clicks in public scholarship and on CNN and all that stuff. And so I guess I'm just curious about how you see this digital immortality topic tapping into some of those larger

Tomasz Hollanek (45:22.158)
Hmm.

AI-tocracy (45:32.057)
narratives or maybe trends that we're seeing in AI more broadly.

Tomasz Hollanek (45:37.934)
Well, yeah, so you asked me not to use very academic terms, but I'll use two.

AI-tocracy (45:43.941)
Nah, go for it. Be academic. If someone's stuck around this long in the episode, they're at least open to some academic terminology about this, so go for it.

Tomasz Hollanek (45:52.162)
Yeah. No, but in some sense, I think we see two, well, we see multiple futures for these technologies, but I think two bigger trends, and one dominant for sure, and that is transhumanism. So thinking about how technologies can help us become better humans, you know, or more human, that is to say how we can...

how we can enhance ourselves as a species through technology. So that's one way of thinking about technology in general and digital mortality in particular, know, upload uploading in particular, you know, how we can transcend our own humanity, how we can, this is what I mean when I say, you know, more human as it like be, how we can, how we can become something that is beyond the human as we see it now. But then there is the other intellectual trend that we

could point to. So that was transhumanism, but there is also posthumanism. And this is really about thinking, this is what I started our conversation with, which is how we can understand humanity through its relationships with others. So how we can decenter humanism, conservative humanism, and anthropocentrism from our conversations about ethics or politics.

And so in many ways this is what I hope we can, I mean this is very difficult, but I hope we can both make this view of potential technological futures more mainstream, but also maybe make it more relevant to some of these conversations, make it seem more relevant. And so, with Kasia for instance, we have just finished an article actually about the environmental impact of digital immortality.

In other words, you know, this paradox that I mentioned earlier, which is: we talk about digital immortality as a way to dematerialize ourselves, while actually digital immortality is enabled by a very material set of infrastructures that have a very specific environmental impact and a footprint on our environment. And so maybe...

Tomasz Hollanek (48:09.098)
In that article we tried to think about whether digital immortality can be post-human or rather whether the project of digital immortality can be not only about the preservation of ourselves, whether it can be about more than just the human ego essentially. And the answer is complicated, but in a nutshell it is very difficult. It is very difficult to see how

we might imagine different futures for digital immortality that are not only based on the idea of preserving specific individual identities of people in the digital sphere.

AI-tocracy (48:54.66)
One thing I did wanna ask you about was, you've been doing a lot of work not just in a Western context, asking other people, I believe doing some workshops with folks in China, if I'm not mistaken, and some other areas around the world. And I would be remiss if I didn't ask you, what's going on with that? And how are you thinking about this cultural difference? Because in the US, right, there's this like...

Tomasz Hollanek (49:09.901)
Yeah.

AI-tocracy (49:21.714)
fear, this cultural fear of death. We don't want to talk about it, generally speaking, right, in the broadest sense. And I'm just curious what you're seeing around the globe and what prompted you to do that work.

Tomasz Hollanek (49:25.933)
Mmm.

Tomasz Hollanek (49:34.476)
So this work that you're mentioning I'm doing with Kasia. It's part of her bigger project, focused on the imaginaries of immortality in the age of AI. So maybe I can focus on the part that we're doing together, which is specifically thinking about new culturally sensitive standards for designing in this space,

ways of thinking about producing products for the digital afterlife industry in a more culturally sensitive and context-specific way. So we organized workshops

with experts in China, India, and Poland. And when I say experts: we came up with a list of domains that we see as central to this conversation, but at the same time, experts from these domains rarely feature in the kinds of conversations that we're having right now. So we invited religious or spiritual leaders. We invited people from the funeral industry. We invited palliative care nurses and doctors to the

conversation.

And we wanted to learn how the idea of digital immortality might be viewed differently in these three places specifically, but also how we can learn something about designing the technologies more ethically and if these technologies can be seen as desirable in the first place. And you know, I would love to hear more of your thoughts on this precisely because of your double background.

Tomasz Hollanek (51:22.576)
But one thing that I think I can say now that was very clear to us based on these workshops was that most religions are actually pretty much compatible with the idea of digital immortality that we are discussing right now. The only religion that seems to...

well, the religion where essentially there is more tension with this idea of digital immortality, in the form of post-mortem avatars and grief bots, is Buddhism. Because Buddhism is fundamentally about letting go; trying to hang on to things, material and immaterial, is, I mean, in a nutshell, I'm simplifying really, but is the source of suffering, right? And so one thing that definitely came up in these

conversations in China and India was that, you know, maybe these technologies could be inhibiting our ability to let go, which is so central to well-being and to finding happiness and, well, to finding peace in life and in death.

AI-tocracy (52:31.077)
Yes, I have lots of thoughts about this, but I also know that we're running low on time and people are here to listen to you. And so I want to bring us, I think, to an end with something more grounded again. I think your expertise as well is blending this philosophy component of digital immortality with actually talking to people, the practical. And I'm wondering, like,

Tomasz Hollanek (52:49.07)
Hmm.

AI-tocracy (53:00.901)
practically, for this technology, maybe dead bots in particular, where do you see it going within the next five years? Is it growing as fast as I'm making it sound? Are there challenges on the ground? Like, if you were to summarize in a few sentences where you see this going within the next five years, what would you say?

Tomasz Hollanek (53:18.882)
Hmm.

Tomasz Hollanek (53:22.316)
Right. I would say that, again, something is very clear when we start having conversations with people who are not usually in this space. When I say this space, I mean Western-based, Anglophone, and at the intersection between academia and industry and policy as well. What we get a sense of is that people

in all of these places that we mentioned are very suspicious of commercial companies designing these products.

So we have to distinguish here between two things, like being skeptical of digital immortality in general as an aspiration and being skeptical of commercial companies trying to enable that digital immortality. And I think what is clear to me is that actually people would like to experiment with it, would like to see how indeed it might enable us to imagine how we could shape our legacy in a different way or how it might enable more people to...

preserve something for future generations thanks to AI. And this is something we do through speculative design as a method, to enable people who usually don't think about designing technologies. We encourage them through speculative design to imagine how these technologies could or would work differently. And what is very clear from that part of our work is that

they imagine it as something that should not be commercially driven, but actually maybe provided by the state, or based on an NGO model. But definitely something that is not directly tied to a commercial interest.

AI-tocracy (55:15.577)
Yeah, to your point on what my thoughts are on all of this, I would love to do a follow-up in a year in which we bring on maybe Kasia and we talk about what has changed over the past year and where the conversations are. We can call it Digital Immortality Part Two.

Tomasz Hollanek (55:24.366)
Mmm.

AI-tocracy (55:33.327)
Part 2A if we're feeling fancy there, as again, we could talk about this for several more hours. But as we close, I'm just stealing directly from Ezra Klein and how he asks people to recommend books. I am wondering, because I think it's a fun question, if you have a book or other piece of media, movie, etc., that you would recommend people listen to or watch right now.

Tomasz Hollanek (55:33.645)
Yeah.

Tomasz Hollanek (55:51.789)
Mmm.

Tomasz Hollanek (55:58.67)
Sure. So actually, coming back to the India workshop, one of our participants mentioned the idea that we need to relearn how to die, that it's a skill that we need to learn, that we need to tap into this cultural and communal knowledge that is there somewhere. It's just that we don't use it as often,

because of what happened with the industrialization of death, essentially, and the medicalization of death as a process as well. And so, thinking about this question of how can we learn how to die, in other words, how we can think about death in a more mundane way, in a way that makes us ready for when it happens: a film that I would recommend is Pedro Almodóvar's The Room Next Door,

which tells the story of a woman who is terminally ill. She knows that no treatment will alleviate her pain or cure her, and she essentially asks a friend, not a very close friend, to accompany her to a house outside of the city where they both know she will eventually commit suicide.

And this story of the two women and how, you know, one... Well, essentially, I guess the reason why I'm recommending this film, it's a pretty difficult one, but it is...

It points to the question of dignity that is central to this conversation. Assisted suicide is all about this conversation of a person's dignity and the right to decide for themselves under certain conditions, on the one hand. On the other hand, and this is central to some of the themes that we have been discussing today, there's the idea of relationality, and how our death, but also what we leave behind, affects others, which I think we should always

Tomasz Hollanek (58:12.4)
consider to be central when we're thinking about the risks and benefits of these technologies.

AI-tocracy (58:19.534)
Tomasz, as we close, where can people reach you, contact you? If you don't want people to contact you, you can also say that. But if folks are interested in this topic or getting connected, how can they do that?

Tomasz Hollanek (58:26.734)
Sure. I think LinkedIn is the best way because that's the only social medium that I still use. So yeah, LinkedIn is good.

AI-tocracy (58:37.902)
So just look up Tomasz's LinkedIn. LinkedIn is fascinating right now, and again, that's an episode for another day. To listeners, thank you so much for joining AI-tocracy Live. Give me feedback on whether that title is just blatantly clickbait and annoying or...

But it's been great to have Tomasz here. Some of the most exciting work, at least to me, going on right now, it's really groundbreaking. So keep an eye on his research and on all the great work that his team is doing. But for now, Tomasz, thank you so much for joining us.

Tomasz Hollanek (59:17.432)
Thank you so much, this was really great and I look forward to all our future conversations and I really want to hear more from you as well. So thank you so much.

AI-tocracy (59:23.845)
Yeah, I've been very coy this conversation, but yes, well, we will talk soon. All right.

Tomasz Hollanek (59:28.064)
I can imagine, yeah.