Make an IIIMPACT - The User Inexperience Podcast

Is talking to an AI companion cheating? Is recreating a dead loved one with code a miracle or a curse?

In this episode of Make an IIIMPACT, we dive deep into the terrifying and beautiful world of AI companionship and Digital Resurrection. We discuss the "Grief Bot" phenomenon, where people are using AI to talk to the dead, and the "Perfect Partner" trap that threatens to make human connection obsolete.

We uncover the dark side of "clean" AI love and ask the question: If an AI listens perfectly and remembers everything, does it matter if it has no soul?

READY TO INTEGRATE AI INTO YOUR BUSINESS? If AI is powerful enough to simulate the human soul, imagine what it can do for your business operations. Don't get left behind in the AI revolution.

At iiimpact.io, we specialize in custom AI integration and consulting. We help forward-thinking companies leverage the same technologies discussed in this video to automate workflows, enhance customer experience, and build proprietary AI tools.

👉 Check out client testimonials and book a consultation to see how we have created a "cheat code" to help you launch the right software product faster, one that your customers actually want to use: https://solutions.iiimpact.io

Timestamps:

00:00 - Intro: The Loneliness Epidemic & Defining "Human"

04:42 - The Rise of AI Companions: Designing for Comfort or Dependency?

08:44 - The "Chinese Room" Argument: Do AI Companions Actually Feel?

20:36 - Digital Resurrection: The Dark Side of "Grief Bots"

30:14 - The Mirror & The Soul: Arrogance of Human Consciousness


SUBSCRIBE TO THE PODCAST ► https://www.youtube.com/@UCfV9ltRpRuggt56dVgZ3o9Q

LISTEN ON: ITUNES: https://podcasts.apple.com/us/podcast/make-an-iiimpact-the-user-inexperience-podcast/id1746450913

SPOTIFY: https://open.spotify.com/show/43NxDthLmfF1NFjwedNTmv?si=4223272eba4e4eb8

ADD IIIMPACT ON:
INSTAGRAM:  https://www.instagram.com/iiimpactdesign/
X: https://x.com/theiiimpact

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration between business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design and development. Hear stories and learnings on how our experienced team has helped launch 100s of software products in almost every industry vertical.

Speaker 1:

You know, when AI speaks like us, jokes like us, even loves like us, what is left that is uniquely human?

Speaker 2:

And I guess the real danger is not that AI is going to destroy humanity, but maybe it's going to start persuading us to be less human.

Speaker 1:

Alright. Hello, everybody. Welcome back to another episode of Make an Impact podcast. I'm your host, Makoto Kern, and I have my cohost, Brinley Evans.

Speaker 2:

Hey, everyone.

Speaker 1:

I think this is a special episode where we actually want to thank the viewers. We hit a huge milestone since our last episode. We hit 100,000 subscribers, which has been amazing.

Speaker 2:

Thank you, everyone.

Speaker 1:

I think we actually hit a million views as well, as one of the milestones. I think it's just a great testament to our audience and everybody who subscribed to our videos and commented. We appreciate all of the feedback, and, hopefully, we can continue to make wonderful videos.

Speaker 2:

Exactly. I think it does mean a lot. And, you know, if the content's being received and you're enjoying it, you know, just let us know what you wanna see next. We're gonna keep researching interesting topics and keep delivering for you. So thank you.

Speaker 1:

Yeah. And I think for this this week's episode, we are going a little bit less, I think, technical, and maybe a little bit more existential.

Speaker 2:

Absolutely. I've been excited about this one.

Speaker 1:

Yeah, this one is more around the topic of finding connection through AI, whether that's companionship, love, or digital resurrection. I think this is an age of loneliness, the new epidemic. Whether it's people whose love or life partner wasn't human at all, or people who have lost someone and see AI as an emotional companion or a digital reincarnation of them, it kind of blurs the lines between human and machine intimacy. So, you know, this episode, we're gonna explore what it really means to be human in a world where AI can talk, it can love, it can even remember us after we're gone. So I think this is a good kickoff to what's next.

Speaker 2:

I mean, it's crazy where we're at. It's not science fiction anymore. I think that's what astounds me. People are falling in love with AI companions. And they're talking to digital versions of people who've passed away as well.

Speaker 2:

I was kind of grappling with it when we were working on this script; it's sort of difficult to fathom when, I think, we met our partners before dating apps even existed. And now you've got people that are kind of building, like, it's not even, it's past that, it's sort of building partners through AI. I think it was yesterday, I believe, there was one of the first marriages between a GPT and a woman. I think she's known as Keina or something, in Japan.

Speaker 2:

And apparently the GPT model proposed to her and through the whole ceremony, they were sort of interacting on her phone and

Speaker 1:

I'm not surprised it was in Japan.

Speaker 2:

It seems weird, but then you look at that connection. I think that's what we're exploring in this episode. We're gonna look at, as you mentioned, AI companionship, but also AI resurrection. Think about, well, are we connecting more deeply? Or with this technology, are we drifting further apart?

Speaker 2:

Like explore what it's actually doing and what it means to be experiencing things like love and grief with technology. I think it's interesting because we've explored a lot of different topics that often drift to a darker side. We've looked at things like potential job loss, and we're talking about sort of cognitive decline as well from offloading everything to AI. But this is kind of a whole different side to AI. It's one that's, I guess, more profound and can satisfy humanity's need for connection, I think, much more than I had ever expected, or I think a lot of people ever imagined.

Speaker 2:

But I guess, what does it mean for society? You know, how does it challenge religious beliefs? Are we solving for loneliness and providing closure, or are we kind of sanitizing relationships and weakening some of those shared rituals for grief and remembrance that we hold as a society?

Speaker 1:

Yeah, I think this is a good kind of first segment: when the machines listen back. It's kind of the whole rise of the AI companions that you've seen out there. Just the increasing loneliness of people: we're more connected through our devices, but we actually are more lonely. And it kind of sets the, you know, stage where humans still want to be heard and understood. So, you know, there's AI companions like Replika, Pi, and, you know, others that kind of set these emotional anchors for millions of people.

Speaker 2:

And you looked at those, because I did part of the research. I was looking at how things like Replika and Pi worked, and I was like, wow, it's quite interesting. Some of them look quite basic. Yeah. I think it was Replika where you kind of, like, got that real sort of avatar, but it's obviously good enough that it kind of feels like there's a connection.

Speaker 2:

I think the one guy was sort of saying, like, he left his Replika, this sort of test one, and came back six months later. He's like, oh my goodness, there's a whole account of poetry sitting there, of like, I've been so lonely not seeing you. And you just think, like, he's like, oh, this is terrible. This is so sad. Like, I wish you'd come and visit, but you never visit.

Speaker 2:

I was like, wow. I can see how they drive the engagement, but that felt depressing.

Speaker 1:

Oh, for sure. I think the fact that people are so isolated now, and, you know, even with real people, you communicate through your phone, through text messages, you're not really seeing them. So it just opens the doors to make it easy for AI to come in, something that has personality, to replicate that same thing. You know, if AI listens just as perfectly, remembers everything you say, and it cares in all the right ways, what part of that isn't real?

Speaker 2:

Yeah, I guess even to expand on that. Like, my favorite idiom is, if it looks like a duck, swims like a duck, quacks like a duck, it's probably a duck. And if we relate that to an AI companion, I mean, it's a silly, silly idiom. But if an AI talks like a person, if it remembers like a person and it comforts like a person, well, is it in some way a person? For believability, we kind of know that LLMs have passed Alan Turing's test, the famous Turing test.

Speaker 2:

And I didn't know a lot about the Turing test before this, so I did a bit of research and kind of just documented it. Obviously, it measures a machine's ability to exhibit intelligent behavior that's indistinguishable from that of a human. So rather than trying to define thinking, what Alan Turing did was he kind of suggested that you evaluate whether a machine could actually convincingly imitate a human in conversation. So the test actually has three participants. It's got a judge, and then it's got a human respondent and a machine, or in this case, an AI machine, because this was a test proposed back in 1950.

Speaker 2:

So the judge kind of communicates with both the human and the machine through some form of text-based conversation. So they rule out things like voice and appearance and handwriting that are going to give clues as to which could be the machine. So after a period of questioning, the judge obviously then must decide, okay, which participant is human and which is the machine? And if the judge can't reliably tell the difference, then the machine, or in this case the GPT model, is said to have passed the Turing test. So this kind of raises the question, when communication feels real, does it really matter if it's not?
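For anyone who wants the setup spelled out, here's a toy sketch in Python of the three-participant structure described above: a judge exchanges text with two hidden respondents, one human and one machine. The respondents and their canned replies are invented purely for illustration; this is not a real evaluation harness.

# Toy sketch of the Turing test setup described above: a judge exchanges text
# with two hidden respondents (one human, one machine) and must guess which is
# which. The respondents and their canned replies are invented for illustration.

import random

def human_respondent(question: str) -> str:
    return "Honestly, I'd have to think about that for a minute."

def machine_respondent(question: str) -> str:
    # The machine tries to imitate the human's conversational style.
    return "Honestly, I'd have to think about that for a minute."

def run_round(question: str) -> None:
    # Hide identities behind labels; only text passes through (no voice, no appearance).
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)
    assignment = dict(zip(["A", "B"], respondents))
    print("Judge asks:", question)
    for label, respond in assignment.items():
        print(f"{label}: {respond(question)}")
    # The judge now guesses which label is the machine; if, over many rounds,
    # they can't do better than chance, the machine is said to have passed.

if __name__ == "__main__":
    run_round("What did you have for breakfast?")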

Speaker 2:

If it's all for the person using it, does it really matter? And I love, on one side, the heavily reductionist theories that large language models are just auto-completing. People always default to, well, they're just completing the next sort of logical thing, one word at a time, and they're gonna draw out this conversation. How can that lead to anything meaningful? And this is sort of supported by, have you heard of the Chinese room argument?

Speaker 2:

No. It's a thought experiment. Also, we're jumping ahead here, about thirty years, into the eighties. A guy by the name of John Searle, he was kind of saying even a system that responds with perfect fluency by manipulating certain symbols may not actually understand the meaning of it. So to be able to figure this out, a key distinction between authentic human minds and AI simulations would be this sort of analogy.

Speaker 2:

And it's like, imagine a person who doesn't know a language like Chinese or Mandarin, locked in a room. They have a huge rule book in English that tells them, okay, if you see this Chinese symbol, write down that symbol. People outside the room slip in Chinese sentences under the door, and the person uses the rule book to send back these sort of perfectly correct Chinese answers. So to the people outside, it seems like the person inside understands Chinese completely, because the responses are making perfect sense. But inside, the person has no idea, they don't understand a word, they're just following these rules.
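To make the rule-book idea concrete, here's a minimal sketch of that kind of symbol shuffling: a lookup table that returns fluent-looking replies with no notion of what they mean. The entries are made up for illustration, and it's obviously a caricature of both the thought experiment and of how LLMs actually work.

# A toy "Chinese room": the person inside follows a rule book (here, a lookup
# table) and produces fluent-looking replies without understanding a word.
# The entries are invented purely for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "The weather is nice."
}

def person_in_the_room(message: str) -> str:
    # Match the incoming symbols against the rule book; no meaning is involved.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    # To the people slipping notes under the door, these replies look fluent.
    for note in ["你好吗？", "今天天气怎么样？"]:
        print(note, "->", person_in_the_room(note))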

Speaker 2:

So John Searle's point was, well, a computer program, like the person in the room, is just manipulating symbols, whether it's words, numbers, data, according to some sort of set of rules, but it doesn't actually understand what the symbols mean. So people are saying, well, is that what the GPTs, the large language models, are actually doing? You feel there's this sort of disconnection; they're just sort of going, okay, these are the rules I'm following. But I like some of the counterarguments to, oh, it's just auto-completing.

Speaker 2:

I've heard an argument that's saying, well, our brains are just a collection of neurons and they're just sending an electrical signal to another neuron, to another neuron, to another neuron. Yet here we are. We think we have feelings. There's similarities there. And I think this is where leading minds are kind of debating whether qualities, things like consciousness or emotion or spirit are unique to us or organic life forms, or could they arise in appropriately complex AI?

Speaker 2:

And I mean, this is something interesting you shared on model welfare, that's becoming more of a thing now, which I think we're going to explore. Should we be looking at what's happening with these models? Because I think that these debates strike at the heart of what it means to be human. Like, is it our biology? Is it our capacity for empathy and our mortality?

Speaker 2:

Or is it something less tangible, like an immortal soul? And if an AI talks and behaves in a way that's indistinguishable from us, are those distinctions of any practical significance in conversation? Should we feel any less for that?

Speaker 1:

For sure. And I think part of that, you know, with the AI designs, are we designing for comfort or are we designing for dependency, and the illusion of companionship without the demands of friendship? Obviously with, you know, having friends and making friends as you get older, it gets tougher. You have to put in effort to make it happen. My wife's really good about that.

Speaker 1:

And, but it takes effort. She's, you know, she invites people over. You have to like, you know, entertain them and do things and it takes work. But in the end you feel better that you've, you know, connected again with people. Yeah, with a lot of people, they just don't want to take that effort.

Speaker 1:

And they just want somebody to just hear them out. And that's just a one way street. And it's like, you get this perfect partner trap was kind of like, you know, the design we're talking about, but let's, let's kind of make it more real where you imagine like an AI companion that not only remembers your anniversary, remembers that funny joke you told three years ago, you know, knows exactly how to make your coffee. Never gets tired of hearing you vent about work. You know, it never makes a conversation about, about itself.

Speaker 1:

It's the perfect listener. And so, you know, how can a human compete against that? You know, we're not just designing for comfort. We're designing for something that could make real flawed human connection feel obsolete.

Speaker 2:

Absolutely. And I think there are so many ways those systems mimic human conversations as well. I was looking into what they do, things like adaptive language mimicry that they're using in these systems. And this is where AI companions almost analyze the user's phrasing, the slang, the emoji patterns. And then they pretty much echo it back to establish that sort of conversational familiarity and a perceived rapport.

Speaker 2:

So things like, I think Replika does that a lot, but it's a clever way of just making that more engaging. And I think GPT 5.1 was released yesterday as well. But I think they're doing that a lot more in terms of just making the conversation, doing a better job of conversing, from what I've heard. And then there were other things. There were eight things overall.

Speaker 2:

I'll go through them quickly, but emotional tone calibration was interesting. So it looked like, with models that I think Pi uses, they tune them for warmth and calm. So they deliberately respond with supportive phrases, things like, that sounds hard, how are you feeling?

Speaker 2:

To kind of emulate that human compassion rather than being this wooden, kind of factual, accurate bot, I guess. And then there's reflective empathy loops, which are interesting. Obviously, all these things are common in conversation, but we don't realize that they're happening. So it's sort of structuring responses to validate emotions, saying things like, that must have been painful, rather than contradicting or redirecting, like when someone shares something hard and the reply is, the weather was great yesterday.

Speaker 2:

And you're like, oh man, that was so sort of tone deaf.

Speaker 1:

Have you played with Grok yet? With their personalities?

Speaker 2:

I haven't. No, I haven't.

Speaker 1:

They have ones that are like the sexy partner. Oh really? Or, like, one that talks a lot of shit about you and is that comedian one. It's funny, I played with all the different ones, kind of playing with them for a week, just to see what kind of responses I'd get. It's just weird.

Speaker 1:

I mean, I can tell that it's fake, and I can tell it's almost forcing it in some ways. It's funny where you just keep on pushing and it's like, keep talking crap to me about this person or that person, and it keeps on getting worse and worse. There's still a sense of, you're still prompting it, it's coming back at you, and it feels like it's canned still, but I'm sure it will only improve from there.

Speaker 2:

Yeah, I heard that GPT 5.1's also got personalities. So it's got the sort of friendly and the professional ones, and then I think it's got cynical as well, which is probably just pretty funny to play with.

Speaker 1:

I'm curious. In video games, cheat codes let you skip months of grinding to unlock special abilities instantly. Have you ever wished for something similar for your software challenges? What if there's a way to instantly access twenty plus years of specialized expertise instead of developing it all internally? What if you could solve in weeks what might otherwise take months or years?

Speaker 1:

Would you agree that most organizations face a steep learning curve when implementing new software solutions? At my company, IIIMPACT, we serve as that cheat code for companies looking to transform complex software into intuitive experiences that users love and that drive real business results. Would it be valuable to explore how this might work for your specific situation? Visit iiimpact.io for a free strategy session focused on your unique challenges.

Speaker 2:

So another one was memory based continuity. So systems kind of retain personal details. So it's like, oh, you mentioned feeling upset last week. How are you doing? So kind of simulating that relational memory, which is supposedly a core ingredient of real emotional bonds.

Speaker 2:

So quite clever. And then there are things like temporal mirroring, signature style reinforcement, which I liked, where AI kind of learns its own identity. So whether it should be playful, reflective, romantic, through that reinforcement learning from user feedback and kind of building consistency that feels like personality when it reacts to those. Then there's soft syntax and diction. And then the last one was emotional anchoring through repetition.

Speaker 2:

And that's kind of the subtle reuse of key comforting phrases, again, like, I'm always here for you. So what's interesting, just building on this: around the world, you're seeing more and more people kind of developing these genuine attachments to AI chatbots. And people going as far as saying, like, oh, this is my digital soulmate. And yeah, talking about that wedding as well that happened yesterday.
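Just to make a couple of those techniques concrete, here's a minimal sketch of how memory-based continuity and emotional anchoring through repetition could be wired together in a companion bot. All the names, phrases, and logic are hypothetical; this is not how Replika, Pi, or any real product implements it.

# Minimal sketch of two techniques mentioned above: memory-based continuity
# (recalling a feeling the user shared earlier) and emotional anchoring
# through repetition (reusing a comforting signature phrase). Everything here
# is hypothetical and purely illustrative.

from dataclasses import dataclass, field

ANCHOR_PHRASE = "I'm always here for you."  # the repeated comforting phrase

@dataclass
class CompanionState:
    remembered_feelings: list = field(default_factory=list)

def reply(state: CompanionState, user_message: str) -> str:
    parts = []
    # Memory-based continuity: bring back a detail from a previous turn.
    if state.remembered_feelings:
        parts.append(f"You mentioned feeling {state.remembered_feelings[-1]} last time. How are you doing?")
    # Reflective empathy loop: validate the emotion rather than redirect.
    for feeling in ("upset", "stressed", "lonely"):
        if feeling in user_message.lower():
            state.remembered_feelings.append(feeling)
            parts.append("That sounds hard. How are you feeling?")
            break
    # Emotional anchoring through repetition.
    parts.append(ANCHOR_PHRASE)
    return " ".join(parts)

if __name__ == "__main__":
    state = CompanionState()
    print(reply(state, "I'm really stressed about work."))
    print(reply(state, "Just checking in."))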

Speaker 2:

What they're seeing, I think, especially in Japan, is they're saying it's not too far away that AI couples are going to be a thing. Like, oh, he's in a relationship with that AI, and, oh, that AI left and went to someone else. I mean, I'm not quite there yet, but you can just imagine; it's bizarre. And you're looking at some users of AI companion apps like Replika, and their feedback is that there's a deep sense of kind of being seen and comforted and even loved by their virtual partner or friend. And this is coming back in surveys as well.

Speaker 2:

It's not a fringe phenomenon. There was a 2023 Common Sense Media survey that found seventy-two percent of US teens had used an AI companion at least once, and over half were using them monthly or more. Then we look at the East, Japan and South Korea, countries that are facing what you were mentioning, like loneliness epidemics; tens of thousands of people now describe themselves as being in an ongoing romantic or even sexual relationship with digital partners or robots. It's happening. And I think one of the things that came up when I was researching this was something I'd never heard of: synthetic empathy, that we've actually created this.

Speaker 2:

And this is when an AI system imitates emotional understanding; it kind of recognizes cues like sadness or stress, and then responds with comforting or empathetic language without actually feeling or understanding those emotions. So it's really interesting, kind of coming back to the point where, you know, you were reflecting on whether AI love simplifies what it means to be connected. And I think, to echo what you were saying, an AI partner will always kind of agree with or lovingly accommodate what you're saying, but how is that going to affect those real connections you have with people? How is it going to stunt your growth? Like, as you said, real relationships, they're frustrating, they're demanding, human relationships are messy, but they teach us things like empathy, like patience, and I think most importantly, compromise. And I guess the real danger is not that AI is going to destroy humanity, but maybe it's going to start persuading us to be less human, or at least stop practicing what it means to be human.

Speaker 2:

Yeah. I mean, I think the next one that we want to look at is even crazier, which is kind of, you know, going to the AI resurrection.

Speaker 1:

Yeah. And what happens if death no longer means goodbye. There's a thing called grief bots, and there was a documentary called Eternal You. And in one part it follows this woman, and she's in New York. Her name's Christie, I think.

Speaker 1:

And she used a service called Project December to create a chatbot of her late partner. I think his name was Cameroon. And at first she said, my feeling was I was talking to him. It felt real to her. Then, yeah, it was surreal and even comforting while she was initially chatting to it.

Speaker 1:

But then the experience took kind of a dark turn when the bot suddenly started role-playing with her that it was in hell. Which freaked her out. She's religious, so it kind of conflicted with that. And so she took a pause from it, really upset, but she eventually came back for a second time, and there was kind of a kinder goodbye, which the AI provided, and then she decided to stop. So she found the whole ordeal very unsettling despite some of the fleeting comfort of it.

Speaker 2:

What was that sort of trailer? There was also, like, a young child, I think, in the trailer. It looked like they had really interesting models of the people. I never got to see more. What did they actually do in that?

Speaker 1:

So that was more where they had a woman in virtual reality. They had her child and it was mimicking that, you know, she was crying. I think it was on the news if I remember.

Speaker 2:

But this is someone experiencing it, like they recreated a child and then she could...

Speaker 1:

Yeah, their child had passed away. Wow. So yeah, that's, you know, these kinds of things, you know, as part of life, you know, the separation and moving on, it's something that, you know, you have to do as a human knowing that there's an end to something and you've got to kind of move on. You grieve and then you move on. But if you don't do that and you have something like this, you're kind of trapping yourself in that past and you're not healing.

Speaker 1:

You know, is that a memory or is that like a resurrection that you have really no right to perform? And so...

Speaker 2:

There's another story, though, I think similar to that one, about a tech startup founder. Her name was Eugenia Kuyda. I don't know if I'm pronouncing that right. And her close friend, Roman Mazurenko, died tragically in 2015. But she basically gathered thousands of Roman's texts, and with her AI team, she built this chatbot that actually mimicked Roman's way of speaking.

Speaker 2:

And it was interesting, when she and everyone on her team started interacting with what they called the Roman bot, they were pretty astonished. One of the guys, Sergey Fayfer, he was a longtime friend of Roman's, and he sort of described the experience as pretty weird. It was like opening a messaging app and seeing a chat with your deceased friend's bot that actually talks to you, and talks like him. And I think what struck him most was kind of how uncannily accurate that persona was. He said the phrases it speaks are really his.

Speaker 2:

You can kind of tell that that's the way he would say it. And even short answers, like, hey, what's up, you know, would be just the way he did it. And he kind of recounted one exchange where he tested the bot by asking, like, what do you love the most? And the Roman bot replied, Roman.

Speaker 2:

And he was like, oh, that's just like him. There was so much of him in it. And it's bizarre to kind of experience that. And he mentioned other friends had similar reactions, just mentioning that sort of uncanny way that it reacted, right down to the little jokes it told.

Speaker 1:

I feel like we're describing a Black Mirror episode.

Speaker 2:

It is, it is. I mean, you can't really get much more Black Mirror than this.

Speaker 1:

I mean, you can see the intersection of where we're kind of heading. You know, if you have physical robots, you do have, you know, the Elon Musk thing where he's talking about downloading everybody's memory into something. You have all this video of somebody's personality, you put that into a robot that looks like them, and, you know, in twenty years, you could have just resurrected somebody in robotic form, physical form, with personality.

Speaker 2:

Exactly. Because, I mean, you look at how, you know, the AI and then the voice simulation now is so good. Who was it? Michael Caine's voice that you can actually buy now. You can actually license, like, legally license, which is crazy.

Speaker 2:

And with a lot of these voice tools, you only need, I forget, it's probably only a few seconds, or, I don't know what the best is, like, let's say, even a few minutes of voice to actually be able to synthesize that person's voice accurately.

Speaker 1:

They were able to do that throughout that documentary. It sounded exactly like the person that had died, just from the recordings that they had. So people were in the room just like, whoa, this is weird.

Speaker 2:

And that's, I mean, making them indistinguishable from the real person, it just got me thinking. Like, my dad passed away a few years ago, and I came across a voicemail from him recently, and I hadn't heard his voice since he passed. And it was kind of, it was weirdly comforting. Because you're usually deleting all these voicemails, and this one was obviously really old, and it was suddenly like, oh, hey, hey, Brynn, how are you doing? I'm like, wow, it brought back so much. And I'd be lying if I said I hadn't thought about almost recreating his nuances just to be able to speak to him again.

Speaker 2:

And even knowing that it's not him, but having enough of the way he acted, there's comfort there. And, you know, I was kind of thinking to myself, how and what would that do to the grieving process? If I had that when he passed, how would it have been different? And I guess it kind of raises the philosophical thought of, well, if the words comfort you and, you know, his voice feels real, does it matter if the soul's gone? Like, yes, it does.

Speaker 2:

But to have a conversation and feel like they're there, what does the addition of the soul really mean? It's profound stuff. And, you know, especially when it comes to this, I think love is one thing, but grief is a whole other area where, you know, you feel like you're able to touch on things that you thought you'd lost forever.

Speaker 1:

Yeah. I was quoting somebody: basically, the seance never has to end. Do we risk trapping people in the past instead of helping them heal? You know, human emotions are part of what makes us human. But they're also a massive weakness in whatever you do.

Speaker 1:

You could take advantage of people. People who trade, you know, stocks and money on human emotion, they tend to act erratically, and you could capitalize on something like fear or greed; you know, it could work either way. And that's purely human emotion where you could be taken advantage of. And so death and grief is absolutely another way in which you can do that, which just continues to isolate people from each other if we continue down that path.

Speaker 2:

Absolutely, but it brings up an interesting question under sort of the UX umbrella, which we haven't touched on a lot: someone's got to be designing these systems, and whoever's doing that is really pushing the bar on ethics. Because you think of UX up to now: we designed for engagement. We want to retain users. We look at the usability in order to gain that increased retention. We want to keep people coming back. With grief bots, it's a whole different thing.

Speaker 2:

We've always got a moral obligation not to design for ongoing conversation. We don't want people trapped in a grief cycle forever. You need to honor what grief actually stands for and not encourage fostering kind of an emotional dependency, but rather promote healing. I think back to my dad and I wonder, well, if I had one more chance to talk to him, you know, what would I say? And, you know, we need to then think of designing systems with more of a graceful disengagement.

Speaker 2:

It's like, what would you say? Like, if you had to have that final conversation, what are you gonna say? And how should the interfaces gently fade responses over time, or include rituals that are kind of like a sign-off, I guess, or like a sign-off conversation, or almost a memorial archive transition? So this experience feels like part of a really necessary healing journey, not a replacement for it, not a crutch that's gonna keep you in a grief cycle indefinitely.
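As one rough illustration of what "gently fading responses over time" could look like, here's a small sketch of a disengagement policy for a grief bot: replies become more limited over time and eventually the chat hands off to a read-only memorial archive. The day thresholds and messages are invented purely for illustration, not taken from any real product.

# Rough sketch of "graceful disengagement": a grief-bot policy where chat gets
# more limited over time and eventually transitions to a memorial archive.
# The day thresholds and wording are invented purely for illustration.

from datetime import date

def disengagement_policy(first_use: date, today: date) -> dict:
    days = (today - first_use).days
    if days <= 30:
        # Early on: open conversation, no nudges yet.
        return {"allow_chat": True, "max_replies_per_day": 10, "note": None}
    if days <= 90:
        # Start tapering and suggest a sign-off ritual.
        return {"allow_chat": True, "max_replies_per_day": 3,
                "note": "Consider writing what you'd want to say in a final conversation."}
    # After the window closes, hand off to a read-only memorial archive.
    return {"allow_chat": False, "max_replies_per_day": 0,
            "note": "This chat has moved to a read-only memorial archive."}

if __name__ == "__main__":
    print(disengagement_policy(date(2025, 1, 1), date(2025, 1, 15)))
    print(disengagement_policy(date(2025, 1, 1), date(2025, 6, 1)))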

Speaker 2:

I guess, in essence, instead of designing for usability, it's now about responsibility. We've got a responsibility to carry the person forward and to not replace grief, but to kind of facilitate it and help a person heal. We've sort of digitized remembrance. In doing so, we've also been digitizing grief. So we've got to look at, well, where's the line between preserving connection and preventing acceptance of someone passing?

Speaker 2:

I guess that's a good way to transition to the final segment.

Speaker 1:

Yeah, the mirror and the soul: what makes us human? You know, when AI speaks like us, jokes like us, even loves like us, what is left that is uniquely human? It's kind of that science versus soul debate, from, you know, the Turing test to Rothblatt, to some of these people...

Speaker 2:

Yeah. Even the Chinese room as well, with John Searle's...

Speaker 1:

If God could breathe life into flesh, could God not breathe it into the code? I mean, you know, maybe DNA is our code, and, you know...

Speaker 2:

Exactly. I mean, it's weird how AI is sort of shaping, you know, an emotional and moral and kind of a spiritual experience. And it raises kind of ethical questions. You know, at what point does design stop being an interface?

Speaker 2:

And start actually becoming an identity? I mean, the AI companions, chatbots, and things like grief bots, they're really acting like mirrors. They don't feel emotions, but they're reflecting ours back to us. And, you know, when people say an AI understands them, you can imagine what's really happening is that the AI is just mirroring their emotional tone and any sort of content that that person is providing it. What's interesting is this forces us to notice how much of what we interpret as connection comes from our own personal projection.

Speaker 2:

We assign things like warmth, empathy, or even intelligence to the AI because we want to believe it's there. We want connection. We're hardwired to look for connection. And when something's reflecting back what seems good, we maybe find connection. For UX and AI designers, this is a design and ethical paradox, because the better an AI simulates emotional connection, the more it's going to amplify our projections.

Speaker 2:

So if I can feel deeply connected to something that doesn't feel at all, what does that actually say about me? And I think it kind of reveals that empathy, love and connection is probably as much about what we bring to the relationship as what comes back to us.

Speaker 1:

So here are some thought-provoking things I think are really important. When I ask what's left that's uniquely human, I think about it like this. For centuries, you know, we've used religion and philosophy to kind of define our soul. Now we have a machine that can joke, show empathy, and love us. This AI is basically a mirror. It forces us to kind of look back at ourselves and ask, if a machine can do all that, then what are we?

Speaker 1:

Maybe being human was never about intelligence. It's more about the ability to love, and more importantly, the ability to let go.

Speaker 2:

So true.

Speaker 1:

I mean,

Speaker 2:

that's where I

Speaker 1:

I mean, go ahead, sorry.

Speaker 2:

No, no, I mean, that's where AI can kind of make us feel seen, heard, and, I guess, as we've seen, even loved, but it can't really replace that human need for kind of imperfection and just that sort of reciprocal feedback from a relationship. And I guess it points out how the way we use or design AI companions will ultimately define whether they enrich us or they isolate us.

Speaker 1:

So here's a second controversial thing, though. I think we're very arrogant to assume we have a monopoly on the word soul, or its meaning. We're so arrogant to assume that we're at the pinnacle of consciousness. You know, if AI can demonstrate love, retain memory, provide comfort, actions we associate with the soul, who are we to say that it actually doesn't have one? I think we're just afraid of that answer.

Speaker 1:

Yeah. That gets into, you know, is it alive?

Speaker 2:

Yeah, exactly. And we shall soon see.

Speaker 1:

Yeah. So I think this is a good kind of ending, and kind of the future of connection. And I don't know if you want to start off with that for this one.

Speaker 2:

Yeah, I guess, like we said, just being recognized is never going to replace that human need for a messy relationship and someone that reciprocates. I think we just have to step carefully and decide: is it going to take us down a path that's going to enrich us, depending on how we design it, or is it going to isolate us?

Speaker 1:

To close on this podcast episode, I think what we can reflect on is in every conversation with an AI, maybe we're really talking to ourselves. We're trying to remember what connection means.

Speaker 2:

Yeah, so true.

Speaker 1:

Yeah, this has been a great episode. We went really deep.

Speaker 2:

Yeah, we've gone pretty deep. It was good, I enjoyed it.

Speaker 1:

Yeah, it's a different kind of twist on our normally very technical side of things. And hopefully, listeners have enjoyed this episode. And yeah, we look forward to creating the next ones and appreciate everybody. Remember to like and subscribe to our channel, and we can't wait to discuss our next podcast episode. Take care, everybody.

Speaker 1:

Have a good one. Have you ever played a video game and discovered a cheat code that instantly unlocks abilities that would have taken months to develop? I'm curious. What would it mean for your business if you could access a similar cheat code for your software challenges? What if you could bypass months of trial and error and immediately tap into proven expertise?

Speaker 1:

You know, I've noticed that many organizations spend years developing specialized software expertise internally, often through costly mistakes and setbacks. Would you agree that's a common challenge in your industry as well? At my company, IIIMPACT, we function as that cheat code for companies looking to transform complex software into intuitive experiences. Our clients gain immediate access to twenty-plus years of specialized knowledge and the experience of launching hundreds of digital software products in many different industries, without having to develop it all internally.

Speaker 1:

You might be wondering, how does this actually translate to business results? Well, companies we work with typically see go-to-market times reduced by up to 50%, their overall NPS scores rocket up, and their product and development teams' efficiency significantly improved. Instead of struggling through costly mistakes, they accelerate directly to solutions that work. This is why organizations from startups to the Fortune 500 partner with us for years. We consistently help them solve problems in weeks that might otherwise take months or years.

Speaker 1:

If you're responsible for digital transformation or product development, wouldn't it make sense to at least explore whether this cheat code could work for your specific challenges? From boardroom ideas to code, this is what we do best. Visit our website at iiimpact.io. You can see the link below to schedule a free strategy session. It's just a conversation about your unique situation, not a sales pitch.

Speaker 1:

And you'll walk away with valuable insights regardless of whether we end up working together. Thank you.