Pondering AI

Shannon Mullen O’Keefe champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations.    

Shannon shares her professional journey from curating leaders to curating innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She shares powerful insights spurred by the values and questions posed in the book 10 Moral Questions: How to Design Tech and AI Responsibly. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon underscores the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off.

Shannon Mullen O’Keefe is the Curator of the Museum of Ideas and co-author of the Q Collective’s book 10 Moral Questions: How to Design Tech and AI Responsibly. Learn more at https://www.10moralquestions.com/

Creators & Guests

Kimberly Nevala
Strategic advisor at SAS
Shannon Mullen O’Keefe
The Museum of Ideas, IEEE SA Positive Planet Initiative

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I am your host Kimberly Nevala. Today, I'm beyond thrilled to be joined by Shannon Mullen O'Keefe. Shannon is the Chief Curator of the Museum of Ideas - that is exactly as cool as it sounds - and the collaborative co-author of the Q Collective's new book ‘10 Moral Questions: How to Design Tech and AI Responsibly’. Thank you for joining us, Shannon.

SHANNON MULLEN O'KEEFE: Thank you for having me, Kimberly.

KIMBERLY NEVALA: To kick things off, can you provide a quick synopsis of your background, and what motivates you in these very varied endeavors?

SHANNON MULLEN O'KEEFE: I spent most of my life leading professional services teams. So most of my life I've actually been a leader of teams, sort of curating people and their ideas to perform for companies. But right now, I'm actually what I call the curator of the Museum of Ideas, where I love to partner with leaders, thinkers, and everyday experts to curate their ideas. And what that usually looks like, actually, is me doing what you're doing today. I partner with thinkers and I most often write articles - that's what usually happens. But I'm also now working on an art installation and a global collaboration. So this can take lots of different forms.

KIMBERLY NEVALA: I think this is fascinating. In fact, I found you - I won't say randomly because I was purposely looking for content - through some of those articles and was really impressed by the breadth of topics that you were covering. Now, you recently published a book with the Q Collective. Can you tell us how that specific collaboration came about?

SHANNON MULLEN O'KEEFE: I'm a member of a community called the House of Beautiful Business.

It's a community of about 25,000 people who call themselves the Network for the Life-Centered Economy. I'm a resident in that community, and I have been for a while. In 2021 or so, they put out a call to their community. And they said, would any of you be willing to craft 10 moral questions for tech creators? A few of us responded to that call.

So I and my four co-collaborators on the recent book said that we would. We created this framework of 10 moral questions and five values. We initially presented that at a conference in Lisbon, Portugal in 2021 that was a part of the festival for the House of Beautiful Business. From there, we decided to continue to collaborate.

I think it's a great story because the story is really about them just asking for people to come forward to do something like that. And we just did it because we wanted to as a community. There was no prize. There was nothing else except for that we had an interest in the topic. After we presented that in Lisbon, we realized that we could do something more with it.

And it's cool because I'm based in the US Midwest. But two of my partners are based in Germany. One of them is based in Brussels and one of them is a global nomad. So we spend all of our time collaborating like you and I are talking right now over online platforms. I think it's a great story. That's where it started.

KIMBERLY NEVALA: That is awesome. I would also like to connect with your global nomad and learn how to do that. That sounds amazing.


KIMBERLY NEVALA: I want to come back to the House of Beautiful Business at some point and ask about the tenets of a life-centered economy, which is fascinating and sounds lovely. But first, let's spend a little bit of time with the book. Was there a central question that ultimately inspired the 10 questions themselves?

SHANNON MULLEN O'KEEFE: I don't remember there being one; I think there was just that call put out.

But if there was one it would have to be: does technology always do what we hope it will? Does it always serve its purpose for us? It's meant to actually make our lives better. I think it's the practical application of scientific knowledge. That's what we hope it will do. But does it always fulfill that purpose?

This is a podcast about AI. There's a lot of discussion right now about AI. What is the promise of AI? What is the peril of AI?

And I often think about that adage from F. Scott Fitzgerald: "Wisdom is the ability to balance two opposing mindsets simultaneously." And that's kind of where we are in this discussion about technology. So it would be just that: Is it doing what we hope it will for us?

KIMBERLY NEVALA: In our initial conversations, you noted that, yes, we should be applying this questioning to things like AI. But we could apply this to almost any technology. And it's not a once and done conversation. We should go back periodically and check on the technologies that have been out there and that we did deploy at a certain point. Are there some examples of questioning technology, both digitally and others, that you could share with the audience?

SHANNON MULLEN O'KEEFE: AI is the obvious one.

When you and I were chatting in advance of this, I even made light of the idea of the light bulb. Which is obviously an extreme, kind of funny example. But, then again, are there times when we're not doing so great with artificial light? Might we benefit as humans from more darkness sometimes? We're a stressed-out society sometimes. There's lots of talk about that. There's light pollution.
So I know it's kind of an extreme funny example, but it's something to think about. I think we owe it to ourselves to constantly have that conversation. There's the environmental impacts of that, too. So that is one for sure.

KIMBERLY NEVALA: That spawns the question: when does a technology that was serving a particular purpose start to exceed that initial purpose and no longer serve us in the way that we intended? What are the boundaries of a technology once it is deployed and being used in other ways?


KIMBERLY NEVALA: You mentioned the light bulb as a good example of non-digital technology. Are there contemporary examples of digital applications or AI applications that, again, showcase the types of questions we need to be asking as we develop and deploy these technologies?

SHANNON MULLEN O'KEEFE: A great example as we're thinking about AI, for example, is the idea of voice technology. I heard somebody using this example of the promise of it relative to the disease of ALS - where it may help somebody who's losing their voice to have a voice. That's a beautiful application of what could be the beautiful promise of AI.

But, on the other hand, we know there's this concept of deepfakes and digital fakes. And a voice may be used to con somebody out of money, for example. The example I've heard used is someone's mother's voice being replicated. They call: can I have money? And a person gives it over because they feel like that's a real person.

So that's two examples in which the technology can be used for a really beautiful purpose but also a nefarious purpose. And so, again, it's that idea of balancing that and making sure that we're having these conversations.

The other one that I was interested in that I've seen at my local grocery store was the example of palm technology where you don't need to bring your debit card maybe to the grocery store. You can automatically debit from your bank account. But as we project that into the future for people who maybe have a low bank account, do we really want our physical body connected to our bank? You can see the clear benefits of it. But then, again, what happens next?

This is, I think, where we get to our questions. One of our values - and we haven't gone there yet in the conversation - but one of our values is care and compassion which is really about caring for others and eliminating suffering. So the convenience of me being able to withdraw from my bank account: amazing. But what about people who that might not work for on down the line?

KIMBERLY NEVALA: It's a great example of what sometimes has been called the tyranny of convenience. Recently, I was on the line with my bank, and they said, can we take your voiceprint for security? And I thought, actually did say, hell no. Then I apologized to the poor agent who I yelled at.
But that's the least secure thing you could do right now. Never ask me that question again and you should stop asking other people, too. But…

SHANNON MULLEN O'KEEFE: True, true. But you and I, we're putting our voices out there right now as we speak. So our voices are becoming publicly available. Somebody technically could take them and replicate us. It’s an interesting thought.

I've flown recently and I haven't needed to swipe my ticket or my passport. It's just my face now to board the plane, which is, again, it's quite convenient. But what happens then to my data, the picture of me? And where is that data stored, and how is it used?

I was listening to somebody yesterday talk about national security on a program I was watching. She was talking about the benefits of AI for agents because they won't need to go through all the manual hours of trying to put things together. Super easy to make all of that happen. Then again, those same skills can be used to surveil people. I guess we're talking about both sides of the coin on all of these different issues and angles.

KIMBERLY NEVALA: Yeah, for sure. So let's go back to the book. In the book, there are five values that underpin the 10 questions. I know it's early days. It's early conversations about the framework. But when it comes to the values, based on the early conversations you've seen and the broader conversation in the ecosystem, do you think there are one or two of those values that organizations and decision-makers are going to naturally gravitate towards initially?

SHANNON MULLEN O'KEEFE: Yeah. Like we said, it's hard to see because we're just rolling out our book. But if I had to gravitate toward one I would probably say transparency and integrity. That's one that seems to come up as we're thinking about available information, generally speaking.

KIMBERLY NEVALA: That conversation about transparency is interesting because I've asked the question recently: is transparency becoming the new terms and conditions? Where it's 27 pages long and at some point, you just start scrolling because you want the application. So is the idea of just saying we're using AI really going to be enough? What does transparency really mean, and is it a meaningful exchange of information so people really understand what they might be giving up in the course of that (interaction)?

SHANNON MULLEN O'KEEFE: It really is about how is something making the decision about me? If I'm applying for a loan, for example, what information went into a data set about me that might impact making that decision about me? So there is that kind of transparency, that human in the loop. And that ability to even ask a human that can ascertain what happened, what decision points were used as part of it.
The other part of our value, though, is integrity. We have that value. We pair it with integrity. It's not actually as much about the data integrity, which is just the accuracy of the data. It's how we interact as humans when we're using the technology. It’s that idea of, how do I act when somebody else isn't watching what I'm doing. That idea of doing the right thing even though nobody is watching.

One of the interesting conversations we had during our launch events, when we posed this question and value to the group, was with somebody who talked about using a dating app. He talked about how he'll sometimes just pick it up - because these apps are almost gamified - and he'll scroll. And he thought, after thinking through our value, gosh, maybe these are actually humans that I'm looking at online.
It was just a different way for him to think of that. Which goes to that different idea of it's not data integrity, but it's how is it encouraging us to interact with integrity – if that makes sense. I thought it was a really interesting case example that he brought forward in our discussions.

KIMBERLY NEVALA: Yeah, it is. And it's a different way of - as you said, in the online realm and in digital realms, there has been really interesting research that suggests that we are a lot less kind. We are a lot less thoughtful. I've never been accused of being horribly tactful but we're a lot more blunt. We might say things in ways that we would never say to somebody when they're right in front of us. So this is a great example of scrolling through people - male or female - and fundamentally objectifying them because they've become a thing on a scroll. As opposed to there's a person behind that that is maybe not fully represented just by a picture.

SHANNON MULLEN O'KEEFE: Yeah, I thought his insight about that was actually really interesting.

KIMBERLY NEVALA: Now you mentioned transparency and integrity as values that I think have been discussed fairly broadly. They're common, although integrity not quite as much. Transparency certainly is something that we bandy around in technology. Is there a particular value amongst those five that is maybe not necessarily undervalued but might be underappreciated, and is less prevalent in the common conversation today?

SHANNON MULLEN O'KEEFE: I really love that question, actually. The one I think of the five values that we offer is actually discovery. It's that idea that our technology encourages us to encounter the mystery and serendipity of life. And it's something that I think that we can forget.

I just spent time traveling as a part of this book tour. I used my phone a lot - Google Maps - to find my way around. And I was very grateful for it, by the way. I was on buses near Davos. I didn't know the area, so I was happy that I could know where I was. But then, on the other hand, if we become so reliant on that, do we lose the joy of just wandering and finding our way around? So what are we missing that we don't realize? That's a physical example in the real world.

Online, when I scroll and when I visit, then I'm served up ads. And suddenly, the algorithms are serving up things that I may not even realize that I'm missing by not exploring on my own. So of our five, it's one that I think about. It may be something that we don't as often think about.

KIMBERLY NEVALA: It’s great. And you can phrase that in the opposite way. Which is: by deploying this technology in this way, am I actually prohibiting people from discovering things? Or am I nudging them in (particular) ways? A lot of the time, that is purposeful - let's be honest as technology developers. But perhaps we can challenge ourselves to do better.


KIMBERLY NEVALA: Now, let's talk about the 10 questions. I'm not going to ask you to rattle the 10 off. I would very much encourage folks to find them online. And even more so encourage folks to get the book. But can you talk a little bit about the context or the scope of the 10 questions? What's the sort of breadth of landscape they cover?

SHANNON MULLEN O'KEEFE: Well, there's two questions for each of our five values. So I will just say the five values because maybe that'll help to give the kind of the scope or the breadth.

The first is care and compassion which is really about caring for others and eliminating suffering.

The second is discovery. That's the one that you and I have just talked about. Encouraging the mystery and serendipity of life.

Then we have holistic thinking, which is about appreciating all stakeholders and all potential users, even future generations of users.

Then, we talked already also about transparency and integrity - the availability of information.

Balance is our final value, which is about the idea of sustainability and the continuity of the technology.

KIMBERLY NEVALA: You mentioned one particularly interesting conversation that came up: someone reflecting on their own use of the dating apps. Have any of the questions inspired other particularly interesting conversations or examples as you've been traveling around recently?

SHANNON MULLEN O'KEEFE: Yeah, that was definitely one.

I'm also thinking about somebody in one of our launch events who really challenged the idea that we could have shared values across cultures. It's a great question. It always comes up. And it's an important question, I think.

The way we think about that… First of all, we are a group that came together, and we represent all kinds of different backgrounds, and we were able to align on these five values. But then again, I think the important part of this book is the questions. Because those questions are the opportunity for people like you and I to have a conversation. So where we may not agree, the value is something that we can maybe tether our final decision-making to, but the conversation is what really matters.
So with that person talking about the dating app - because we've already talked about that - the conversation he had with the other person, the one that led to that insight, is the important part of this.

I really think, to be honest, it's what's important for our companies to be thinking about. I think of our book landing on that cultural level. Where the values are how we do things. Culture is how we do things. And so these values are that thing that help you to make decisions when you're in that moment.

I'm thinking about the article I saw yesterday in the New York Times. It happened to be about Boeing and the recent incident where the door blew off the plane. This is not picking on Boeing, necessarily. It could be anyone; it happened to be them in this article. But the article - or at least the headline - was that the plane left the factory without the bolts.

A lot of times, if you're working in a culture where speed is the value, or efficiency is the value, then you want to get that plane out the door. But if you're working in a culture where you're using care and compassion - I'll just pick one of our team's values - you're thinking then about that plane as maybe the carer or the carrier of human beings. So before pushing it out the door because you want to do something fast, you stop and say, hey, are all the bolts tied in?

It's an example of how a value can be used in a cultural setting. That's where we really want this book to land. For leaders and teams to be able to have these conversations with each other so they're establishing that kind of cultural level of conversation.

KIMBERLY NEVALA: Does the book provide some guidance to decision-makers, leaders and teams on how to have these conversations or tools to enable those conversations?

SHANNON MULLEN O'KEEFE: Yes, actually we do.

We have some practical activities at the end of the book where we apply the questions to some case examples. I'll leave you to find those in the book. Then we also offer up tools and exercises for teams to use.
So we do prompt people in terms of how you might use this with your teams. It’s a part of the book I actually really love.

We did this in our launch events. We had people apply the question of transparency and integrity. How does this app in that conversation encourage me to interact with integrity? Then people had the opportunity to have that conversation, which is exactly where I think we want things to be.

KIMBERLY NEVALA: I'm going to ask this next question, which I don't think is as contextually relevant to the example of Boeing, as you said.

But one of the pushbacks we frequently see more broadly, especially when we're talking about ethics, is this question of ‘what is the business of business’? What do you say to those that say that these kinds of expansive obligations, including moral and value judgments, just aren't the business of business?

SHANNON MULLEN O'KEEFE: [LAUGHS] That's a great question. Personally, my own opinion is that they are the business of business. I think ethics is the business of leaders. As leaders, it needs to be something that we all own. Doing the right thing matters. It does. That's my own personal opinion about it.

BCG (Boston Consulting Group) contributed a chapter to our book. They did a study with 2,700 companies. It was their Responsible AI Digital Acceleration Index study. And they found in that study that 90% of companies were willing to do ethics or to care about ethics if their customers did.

So one take is, just do it. Do the right thing. But the other is that it matters. If it matters to your customers, maybe that's the reason why to do it. But the other thing can be just the consequences – the reputational consequences of not getting things right - can be really grave. So there are a lot of different avenues or angles into why it matters to do the right thing. One may be just because you believe it is the right thing to do. One might be that your customers believe it's the right thing to do. And one might be the reputational damage. But any way you go, I think it matters.

KIMBERLY NEVALA: Is there a lesson in there for us as individuals as well?

I know you are a huge proponent of the power of the collective and the power of collaborative teams to get a lot done. As individuals, we may sometimes feel insignificant. Or feel like raising our hand doesn't achieve anything. But what you just said is companies care when their customers care. And the question that brought up is, well, they don't know that their customers care unless we, as their customers, say so.

None of us want to be whiny and complain and sometimes we just want to get stuff done. But this seems to be a call to arms for each of us individually to make concerns known. I'm the pain in the butt in the TSA line that doesn't let them take my picture. It does not take them any longer to screen me, by the way. But they sure roll their eyes and I'm OK with that.

There's a tendency not to want to push back at all, especially as individuals. Or to think it's just not going to matter. And what I, perhaps incorrectly, am taking away from what you've said is that, no, it matters. And every voice helps.

SHANNON MULLEN O'KEEFE: It does matter. And we don't want it to only be the end user, only be the consumer. But I think we do owe it to ourselves to say something when we believe it. Or to be like you are and to refuse. I think it does matter for companies to get it right. And I think they ought to know if it's not something that we care about.

KIMBERLY NEVALA: You've also said that the power to create is an awesome power and with that just fundamentally comes the responsibility to do the right thing.
The other side of this conversation sometimes is, especially with AI, this is a general-purpose technology. We can't possibly try to understand what everyone's going to use it for. It's not my job to constrain or think about all of the ways someone (else) might apply it. That's their job. There are really good applications of this technology. And yes, there's always going to be bad.

There's some balancing equation that we're doing. But the potential good always outweighs potential bad. No matter how nebulous the good might be and how clear or convincing the bad might be.

Do you find yourself talking to folks about wielding their power, this power of creation, through AI carefully and mindfully?


SHANNON MULLEN O'KEEFE: I mentioned in some of our earlier conversations Susan Liautaud, who is an ethicist at Stanford, and her book The Power of Ethics which I really love. I like the idea that she puts forward that even though something may be a best practice now, it doesn't mean that it's without question. We're fallible. So as humans, I'm fallible. I've never met one that isn't a fallible human.

So we owe it to ourselves to consider our best practices, and that means having an ongoing conversation. It means, as a leader, having a group of challengers around you who are always willing to challenge your current way of thinking, challenge the status quo. It's something that we always need to be having conversations about.

And so, as you said that about AI, we may not know exactly what the detrimental effect is now. But when we do know, at that moment, then we owe it to ourselves to make a different choice. And to hold each other accountable to that different choice.

One of our questions is: can you shut it off? This is the question at the very beginning. If you and I are tech creators, and we're creating something, can we shut the thing off if we need to? That's one of our questions I happen to love. I think it's an important question, particularly thinking about something like AI.

KIMBERLY NEVALA: Yeah. The corollary that spawned for me was not just can we shut it off but will we shut it off? Will we summon the will to do it in this situation? Deciding what that is before it happens might be equally important. Now, in the last few minutes here - because I could go on for a long time…

SHANNON MULLEN O'KEEFE: I could, too. This has been fun. Thank you.

KIMBERLY NEVALA: [LAUGHS] I found you and your work through some writings that were related to both the House of Beautiful Business and your work on IEEE's Advancing Technology for Humanity program where you're on the Rivers and Lakes Committee.
For you, what was the intersection between that work and the work that you're doing now? I know you're passionate about nature and tech. Some might assume that you're just interested in the environmental impacts. But what brought you to that work with IEEE?

SHANNON MULLEN O'KEEFE: Interestingly again, it was about being a part of a community. We've talked about that. I really do think that when people get together they can make a difference together.

Initially, John Havens, who happens to be a member of the House of Beautiful Business community too, put out a call and I answered that call. It's been one of the greatest effects of that community - the network and the teams that I've had an opportunity to intersect with. Although the clear output is obviously the "Strong Sustainability by Design" document. Which is about looking forward at what technologies can help us to, I guess, aid our planet long term.

But we talked about what is my motivator from that perspective. Sometimes we can make the motivators for things like this maybe more complicated. For me, it's the beauty of nature, to be honest. I grew up in central Minnesota. It's the land of 10,000 lakes, if you know that. I spent a lot of time at Lake Sagatagan, which is a lake that's not too far away from where I grew up, where my parents still live. It's just beautiful. Nature is beautiful. To me that's why I would want to contribute to that effort. In addition to my thinking about technology, I just love being out in nature.

KIMBERLY NEVALA: What do you think those of us who are responsible for tech deployment and development can learn from conservation and from nature in general, or from conservationists?

SHANNON MULLEN O'KEEFE: I think I mentioned in one of our earlier conversations, a guy named Alan Booker (I think his name is Alan Booker). Alan presented as a part of the IEEE during one of our town halls or something like that. I love that he put up a slide of a weed, like a plant, but he called it elegant technology. I thought that was such a beautiful framing because he went back to nature and a plant as a source of technology and something that could be inspiration for us as tech creators.

Sometimes we think we need to maybe come up with something completely new. But then again, maybe we just need to look at what's at our feet. To consider the weed, as he did in that case. I thought it was really brilliant. So that's what I would say. I think he would probably consider himself to be a conservationist. I don't know but I loved his framing of that.

KIMBERLY NEVALA: This goes back all the way to that question or that example you had of the light bulb. Maybe we owe it to ourselves to sit in darkness a little bit. Not metaphorical darkness but actual darkness, and enjoy the world around us a little bit more, and that'll ground us.


KIMBERLY NEVALA: Any final words of wisdom or observations you'd like to leave with the audience?

SHANNON MULLEN O'KEEFE: I really appreciate your interest in our team's book. It was, in many ways, a labor of love. We're just getting started putting it out there but we think it can make a real difference for people. So I do hope people will visit our website 10moralquestions.com to check out the values, to check out the book, and to engage in conversations about the questions.

KIMBERLY NEVALA: I consumed the book in one sitting and it was fascinating. I've had a lot of these conversations. But it frames them in a way that's very interesting and very challenging.

Thank you, Shannon, for sharing your thoughts today and inspiring all of us to ask more questions and to actively participate in what is, after all, the collective business of life. I particularly loved the inspiration we and tech can also draw from nature, which you brought to the fore for me. So thank you again for your time and all the insights.

SHANNON MULLEN O'KEEFE: Thank you. It's been great to have the opportunity to talk with you, Kimberly.

KIMBERLY NEVALA: To continue learning from thinkers and questioners such as Shannon about the real impact of AI on our shared human experience, subscribe now.