How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Welcome to 12 Days of Pondering AI. In this capsule series, guests rejoin us to share their insights and intuitions about different aspects of the ever-evolving world of AI. To kick things off, Marisa Tschopp reflects on human and AI interaction. Welcome back, Marisa. It's great to see you.
MARISA TSCHOPP: Good to see you too.
KIMBERLY NEVALA: I think it'd be fair to say this year has provided a bonanza of opportunities to research human-computer interaction. What has been the most consequential takeaway or finding for you this past year?
MARISA TSCHOPP: That is an enormous question, actually, because, as you said, so many things have happened: people falling in love, people breaking up, people committing suicide based on chatbot conversations. So this year has been somewhat crazy.
But from a very opportunistic perspective, of course, as a researcher this brings all kinds of opportunities to better understand how people interact with, and perceive, machines that talk back to us almost like real humans - when we can no longer tell what is real and what is not, what is human and what is machine. These are exciting times for us.
KIMBERLY NEVALA: And have you been surprised? Were your intuitions correct relative to our ability to discern the difference between human and machine? And/or our ability to project and believe a machine may have human intuitions, human feelings, human empathy, et cetera, et cetera?
MARISA TSCHOPP: Yeah. I think it's a mix. And it's really hard to generalize.
So from the research we've done - although I do not want to judge - it's kind of good to see that people do not relate to, for instance, their digital assistants too much as friends. That kind of relationship comes with ethical consequences that need a lot of discussion.
On the other hand, when you read the news, what do they pick up? They pick up the people who fall in love with chatbots, have sexual intercourse with their avatars, break up, and so on. So it's always a little bit hard to check back with reality or to figure out where we're at.
But what surprised me was, for instance, when we did our research on human-AI relationship perception, we found that one very prominent perception was of the digital system as more of an equal. Not a friend, but also not a servant. And this came as a surprise.
But more interestingly, look at current developments; the classic example now is Microsoft Copilot. Microsoft has moved from its digital assistant Cortana - rest in peace - to Copilot, which is infused with these new capabilities and is super smart and super great. Obviously, this sits on a more equal level: somebody who helps me, who assists me. It's not so much a servant anymore. It's really somebody who is, yeah, a copilot. And this resembles very much what we found, and it underscores how these perceptions may change in the future.
Now, marketing and perception are, of course, two different spheres. But it shows a bit of the direction things are going.
KIMBERLY NEVALA: Well, even in what you just said, there is something interesting, or telling, in that you said this is someone that can assist me and not necessarily something that can assist me.
MARISA TSCHOPP: Yeah, of course. It's always hard. How often do I say 'her' instead of Alexa? And I'm like, oh, come on. You get used to it so easily. And it's always the question: do you do that subconsciously, or do you really think it's a her? Or do you just do it because you're too lazy to say Alexa or 'the AI system' all the time? I try to always say 'her' in big quotation marks, at least when I talk. But humans are lazy. Our brains are lazy. So these things happen.
KIMBERLY NEVALA: And that may play into where we are sometimes overconfident in the results. As long as it sounds realistically human, or the result is delivered in a confident tone, we tend to believe it with less skepticism than perhaps we should.
MARISA TSCHOPP: Yeah, and why should we? Because we cannot differentiate anymore whether it's written by a human or whether a picture is real or unreal. And this will proliferate in the future. This is clear.
So I think all these questions about whether we can differentiate machine-written or machine-generated content or not are irrelevant. Because we can't. The questions we should focus on are: in which contexts does it matter? Where is it important that we can differentiate?
KIMBERLY NEVALA: Important point. Correct me if I'm wrong. I saw a post from you on LinkedIn where you said you were surprised by the sheer joy you were getting from working with some of the generative AI tools. Is that right?
MARISA TSCHOPP: Oh yeah. I was invited to give a lecture at a design university, an art university, and I got access to Midjourney and a little bit of tutoring from one of my friends. And if you have some kind of artsy soul in you… I was astonished by the results. I was astonished by the kind of empowerment I felt. I can be an artist too!
It was just amazing. It was simply amazing and also extremely awkward. Because, of course, I've been reading up a lot about where these images come from and about all the actors and artists who rightfully speak up and fight against these tools - or at least against the non-consensual use of their work. So living in that paradox has been a great, funny, or awkward experience as well.
KIMBERLY NEVALA: Was that reaction surprising to you, given the work that you do in human-machine interaction?
MARISA TSCHOPP: No. It wasn't surprising at all. I knew that I would love to do these kinds of things. In the end, I love technology. I love trying out new things. I love digging into the really awkward things. If I could, I would try recreating my dead grandma's voice: reliving things, talking to her, and seeing how it feels. So it wasn't surprising that I was enthusiastic about it. And that's why I never started in the first place: because I knew it would kind of be wrong, so to speak.
KIMBERLY NEVALA: That's an interesting point. You had some pre-knowledge and made a deliberate decision because you knew it would be enticing. I'm not sure that that's necessarily available to the broader public until it's too late.
MARISA TSCHOPP: It's happened to me quite a few times now, I'd say, in the past seven years. I stay strong, saying, I'm not going to use this app. I'm not going to use that app. I'm not going to use this Lensa, whatsoever.
And then there comes a point where something happens and I am weak in the flesh and I use it. Or some opportunity arises. I cannot teach a course and talk about Midjourney when I've never used it before. That doesn't work for me, because you have to practice what you preach, so to speak. At least have an idea of what you're talking about.
I have a really weird example. So have you seen the Barbie movie?
KIMBERLY NEVALA: Oddly, I have not yet.
MARISA TSCHOPP: Well, no problem. But you know the Barbie movie.
KIMBERLY NEVALA: Yes.
MARISA TSCHOPP: So there was that ridiculous app where you could Barbie yourself. You put your picture in, and you come out as a Barbie. And I don't know what it was. It was a Tuesday night. I was bored. I was at home. I didn't know what to do. I would never have done that, because I have my values. But what did I do? I Barbied myself. I loved it. I loved looking at myself as a Barbie. And I laughed. I even used it for presentations.
So these things continue to happen to me, and have for years. And I think it's because we're all human. It's part of being human. There are other forces driving your behavior than the constant, deliberate thinking about: do I want to use this system or not? Where are my values? Is this a good system? Do I trust this system?
KIMBERLY NEVALA: That in and of itself is a very insightful finding for us to ponder a bit. So all of that being said, what do you think is on the horizon for human-AI relations (I say tongue in cheek) in 2024?
MARISA TSCHOPP: I'm not sure there's any other way to see human-AI interaction in the future other than as a somewhat entangled relationship. All the other proxies, all the other framings we looked at, are just not enough to describe how deeply entangled our lives, our perceptions, our emotions, and our thinking are with the use of, the infusion of, and the collaboration with technology.
So with the much-expected increase in agency, performance, the simulation of emotions, and words and language and all these kinds of things, I do think this is the only way to go forward. Because it gives us the opportunity to look at it from more levels, from different levels. And this, again, gives us the opportunity to make more differentiated decisions, hopefully.
KIMBERLY NEVALA: If you had your druthers, what do you think should be our areas of focus?
MARISA TSCHOPP: I think we're still in a stage where we have to truly explore and confirm where we're at. We don't have tons of research readily available. And the problem is that technology is evolving so quickly, and at such a pace, that findings from last year are already old because the language systems have developed so immensely. And the marketing: products going on and off the market and so on. So I'm not sure. That doesn't really answer your question.
But what I'm trying to say is it's a constant thing. Just as relationships between humans evolve constantly, these relationships with our AI systems will evolve constantly. So it's absolutely conceivable that two years from now either Copilot is dead, or Copilot is everywhere, or Copilot has evolved into my friend next door, or it's back to servant mode. It's a little bit unpredictable. But we do have schemas. And I believe we can use these schemas to explain this behavior and make better predictions about the future.
KIMBERLY NEVALA: Well, I'm going to hope that that prediction about our ability to make better predictions and therefore react to these systems in a rapid and appropriate fashion comes true. Thank you so much.
MARISA TSCHOPP: Thank you for the invitation.
KIMBERLY NEVALA: 12 Days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.