Pondering AI

Kate O’Neill champions strategic optimism, embraces nuance, rejects false dichotomies, calls for mental clarity and agility, empowers with empathy, and anchors human-centric innovation to meaning.

Show Notes

Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale.
In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future.

Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-side-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor in human values and relationships, habituate to change, and actively architect our best human experience – now and in the future.

A transcript of this episode can be found here.

Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Kate O’Neill
Founder and CEO of KO Insights

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

[MUSIC PLAYING]

KIMBERLY NEVALA: Welcome to "Pondering AI." My name is Kimberly Nevala, and I'm a strategic advisor at SAS. I'm so honored to be hosting our second season in which we've been talking to a diverse group of thinkers, advocates, researchers, and doers, all working diligently to ensure our AI-enabled future puts people and our environment first.

Today, I'm beyond excited to be joined by Kate O'Neill, the founder and CEO of KO Insights. Known as the Tech Humanist, Kate is a linguist, an author, and a strategic advisor committed to improving the human experience at scale. Welcome, Kate.

KATE O'NEILL: Thank you, Kimberly.

KIMBERLY NEVALA: So I think we have to start by asking, how does one become a tech humanist?

KATE O'NEILL: Well, it's something that I have been thinking about for a few decades as I worked in and around technology. Because I was always the sort of pain-in-the-butt member of the team saying, but what about the customer? But what about the user? But what about the person who's using the thing that we're building?

And people ultimately appreciate that. But they also find it a nuisance in the short term. So I think what I found is that there's a way to think about the human experience, the superset of all of those-- the user experience, customer experience, and so on-- and the way that technology can be built to accommodate the human experience so that it actually makes the human experience better.

And that felt like a perspective that really wasn't being amplified enough. And so that's been the focus of my work for several years now, including my second-to-last book, Tech Humanist.

KIMBERLY NEVALA: Yeah, this is awesome. And you mentioned your book. You've actually written three: Pixels and Place, Tech Humanist, and A Future So Bright - which I just read. It was awesome. And each, I found, documents a key inflection point in our evolving story - or the evolving story - of our relationship with technology. So will you give us an overview of the evolutionary arc that's captured in those books?

KATE O'NEILL: Yeah, it's a really good question because I actually felt like I needed to ask myself that question as I was working on A Future So Bright. It was like Pixels and Place was clearly about the interconnectedness of physical and digital experiences and, in fact, the interconnectedness of the physical and digital world. And that world connects through our human experiences.
Everywhere that we move through the world, we are stitching together in this data mesh what those worlds know and understand about each other. So that was really intriguing to me. But it also started to open up much bigger questions: if we have this connected world, who's creating it? Who's designing it? What are the responsibilities of the people who are creating those experiences?

And that examination led to Tech Humanist, where the discussion was: it's really a business question. Businesses are generally the ones creating the majority of human experiences. And the motives are often profit-centric. And if they are, then we need to find ways to make sure that those profit motives are also aligned with good human outcomes, and make sure that we can be building technology through business experiences in ways that amplify the human experience.

And so that work led to an even bigger question, which was now that things feel like they're moving in this direction where we also just feel like we're kind of living in technology all the time-- like it's in the water, it's in the air that we breathe-- what does it mean to face what we think of as the future, as this future of data and emerging technology, in a way that reminds us that we're empowered to create it? And so one of the big pieces, of course, as you saw reading A Future So Bright, was talking about climate change and the climate crisis. And that seems to be one of the most urgent and important discussions that we can have as a society.

And a lot of it does pertain to technology. And a lot of it pertains to business decisions. And so these all weave together in ways that I think don't seem readily apparent at first. But the recurring theme, as you saw through the book, through A Future So Bright, is that everything is connected.

And that, I think, is a really important point for us to take away from all of those discussions: we are making decisions today that affect future outcomes. And we have the agency and the empowerment to do that in a responsible and forward-looking way. So I look forward to us doing that.

KIMBERLY NEVALA: Me, too.

KATE O'NEILL: Yeah.

KIMBERLY NEVALA: Now, you're a linguist. And so you're very good with language and very aware that language matters. So I want to play a little bit of a game, perhaps, of: why this and not that? With full acknowledgment in advance that this may in some cases be in fundamental opposition to the "both and" principle you strongly and convincingly espouse. So we'll get to that.

But why human experience and not customer experience?

KATE O'NEILL: Yeah, I mean, I think customer experience is a valid thing. It's just that it's not the be-all and end-all. And I feel like all of the disciplines of customer experience, user experience, patient experience, guest experience, they all provide a really useful lens on the transactionality that takes place within the context of the interactions that are being designed for the business that they operate within.

But there's a bigger picture. And that bigger picture is a really important context to bring to that understanding as well because it brings more empathy and more nuance about what people carry with them when they encounter these interactions and experiences. So we can design customer experiences.
And we need to be thinking within the customer experience design framework and methodologies. We need to be thinking about making sure that relevant information is at hand and that we're making things frictionless and easy and all of the things that we do within customer experience.

But then we also need to be able to zoom out wider to the human experience perspective and be thinking about what kind of empathetic situation can we be aware of? What kind of baggage is someone bringing into this moment? What might be the emotional burden that someone has when they're visiting a loved one in the hospital, and trying to make sense of the hospital's map that's available on their website? There's a very big difference between designing for transactionality in that moment and designing for true empathy of what the human experience there is.

So I think there are a lot of ways that plays out across disciplines.

KIMBERLY NEVALA: Yeah, and as you were talking, it strikes me that when we think about customers or users, they're recipients of - as opposed to participants in - the experience, in a lot of cases.

KATE O'NEILL: True, good point.

KIMBERLY NEVALA: So this probably then leads a little bit to the next question. Why value and not profit?

KATE O'NEILL: Yeah, so profit is a metric, right? It's just a ratio. It's the fact that you took in so much, and it cost you so much. And that, for some reason, has become this all-influential metric that everyone seeks. And it's a very useful metric. It indicates healthiness in a company's operations. But it doesn't tell you everything.

It doesn't tell you whether you actually delighted someone, whether they were happy to give you their money in the end, and whether they would come back and do it again if they had the choice to. And so I think there's a bigger question of where are we actually solving human problems. And how could we do that in a way that is more understanding of what is meaningful to the people who are interacting with us at that moment?

So understanding meaning is a key to understanding value. And I think once we get there, once we get to that point where we know that there's a problem that someone has, and we are empowered to help solve it, then we're truly understanding value. That leads right into profit, by the way.

In general, these kinds of businesses that create meaningful and valuable experiences tend to create profitable ones, too. I just think that having that profit be in the front of that discussion is kind of the tail wagging the dog. And we need to be much more focused on the human aspects of that than on the pure raw ratio metrics of it all.

KIMBERLY NEVALA: And I want to come back to that topic of meaning. But first, I have to ask the last why this, not that. Why optimism and not utopia or even dystopia?

KATE O'NEILL: Yeah, so it's funny. Optimism is a term and a concept that I think gets really maligned. And it's been understood to be this very naive kind of principle.

And in my mind, it is actually possible for optimism to be naive. But my objective in A Future So Bright is to present the idea of strategic optimism, which is to say that you look for the best possible outcomes. And you design a plan to get there.

And so that's why not dystopia or utopia either, because this dichotomy of dystopia versus utopia is all that we've really been given in culture, in literature and science fiction, to think about the future. This is a very limited, unhealthy, and un-useful framework for thinking about the future. It reduces us to victims in our own future, as opposed to reminding us that actions have consequences. And we make decisions. We take actions. And those affect the outcomes.

Of course, we're affected by other people's decisions and actions, too. But we have a lot more agency than our discourse about the future would suggest. So I think it's really important that we bring that optimistic, but strategic, perspective to it.

KIMBERLY NEVALA: Yeah, this feels so important, especially right now, at a time when it seems to be more and more in fashion, in vogue, to hold or acknowledge one idea and one idea only, one opinion and one opinion only - come hell, high water, facts, data, science, or common sense, what have you.

So one of the things that you talk about with strategic optimism is that it's not about rose-colored glasses, and it's not just about leaning into the bad: we have to learn to hold multiple, often opposing ideas and be able to think about them at once. And I started to think about this as almost taking a quantum view, if you will. It almost feels in today's discourse like a heresy. So can you give some examples of why that mindset's so critical in light of these very complicated issues we're addressing, both with and due to technologies such as AI?

KATE O'NEILL: Yeah, I think it's really important, especially when it comes to technologies like AI, because I think we have to be able to recognize that there are true risks and harms with emerging technologies like AI, that the power that these technologies have and the capacity that they have means that they can amplify and scale our biases as well as our values. And that can happen so fast. And it can happen in such a dramatic way, while at the same time we acknowledge that there's tremendous power and capacity for scaling good.

And I have a whole section I'm sure you saw in the book. I talk about the United Nations' sustainable development goals, and the ways that artificial intelligence in particular can be deployed to solve against any one of those 17 goals. And that, to me, is tremendously exciting.

So it's a "both and" for sure. We have to very much acknowledge the very real harms and risks: particularly harms that are falling on certain communities of people. In many cases, with things like facial recognition and other surveillance technology, it's Black and brown communities. We have to be very careful to really, truly acknowledge that and work toward reducing those harms - by also involving people from those communities, affected people, and having them be part of the leadership in the discussion about how these technologies and these programs actually roll out. That's an incredibly important aspect of it.

And we also need to be looking for, hey, what are the ways in which this can actually solve problems, and how can we deploy these technologies so that they use the capacity that they have? There are things like optimizing wind farms so that they can be more efficient, and stopping human trafficking. There are just so many programs that roll out that AI actually helps with.

So it would be impossible to categorize AI as good or bad. It is simply the case that we have to be very disciplined about how we use it. And having this "both and" mindset, this flexibility of mind, allows us to recognize when it is in a position of creating harmful unintended consequences and when it is in a position of bringing good to the world and scaling that good. That's where I think we can do the most good.

KIMBERLY NEVALA: Are we good at having those conversations and taking those positions today, and bringing them to real life? And what can we do to better encapsulate that mindset?

KATE O'NEILL: I mean, I think that there are lots of people who are working around this space. There's the responsible tech movement, for example. And the AI ethics movement is burgeoning. I know of an awful lot of people who are moving into that space. I know of a lot of very, very credible scholars and academics in and around that space. There are a lot of very trustworthy voices out there around this.

So I think that in that regard we have the opportunity to be very good at this. But, yeah, to your more implied point: sure, socially, societally, we have tended to have some difficulty with nuanced discussions. And I think that that's a shame.

And it's probably not a mystery that social media has made that more difficult, because of the divisive polarity of the experience on social media. And we've been hearing that a lot this past week with the whistleblower from Facebook, and the discussions about the algorithms amplifying certain kinds of content. We know this already, though. We really didn't need the whistleblower, although it's wonderful to have that information.

But I write about this in the book, that we know that certain algorithms on social media amplify for certain kinds of engagement, like outrage. We know that the anger reaction, for example, on Facebook is going to tend to get you more of the same kind of content that you reacted angrily to because anger is a very engaging emotion. So we know this. And I think this is just a matter of really kind of doubling down on media literacy and digital literacy for people.

We need programs that can actually get people savvier about the ways that they engage with content online. We need to have programs that are much more in touch with the realities of what we really encounter when we spend time online. But I think that there is hope in the form of those scholars, and academics, and people working in the space. And I think we have some work to do, for sure.

KIMBERLY NEVALA: Yeah, and your comments right there also remind me of something I think you have said, and others as well: that it's so much easier somehow to lean into anger and rage, or to be pessimistic or realistic. And, as you said, optimism is actually quite hard. For something that people think is soft and squishy, this is a really hard concept. Now, someone could say, well, you're really just trying to have it both ways, right? You're trying to say it's really good and it's really bad, and that's all OK - trying to play both sides.

But how is this idea of being able to look at things comprehensively and acknowledge that things can be good and bad, and that there are truths on both sides, different from the both-side-ism that is out there as well?

KATE O'NEILL: Yeah, so I talk in the book about how both-side-ism is a really disturbing trend that's very different from "both and," because what both-side-ism assumes is that if you make one statement, then by default the opposite of it must be acceptable or must be true. And that's not generally the case. There are many, many sides to many truths. And I think that our innate human understanding tells us that that must be true - that there is often more than one way to tell and experience a truth.

But that doesn't mean that if I say the world is round, the world must also be flat because we're accepting all truths here. And so we need to allow for a certain amount of reckoning with scientific fact, or at least accepted scientific knowledge and the scientific consensus. We can see this play out in the pandemic. We can see it play out in a lot of the misinformation and disinformation issues that are out there. And we can see it play out around the tech space as well.

So I think we just have some work to do to make sure that we're learning and unlearning the right kinds of approaches to nuance, and how to respect one another's viewpoints without setting up this false approach that just because one thing is true, it inherently implies the opening of an opposite space with equal truth to the thing that has the consensus of all of the respected scientists in the world. That's not how that works.

So "both and" is never supposed to be about that. "Both and" is about the idea that we have ways to approach the tools and technologies that we face, and the challenges that we face with the resources that we have that can offer us plenty of forward paths, that can offer us a way forward that is bright and better. And we also have to recognize that there are harms that come with some of those things.

So rolling out facial recognition, for example, in communities where accuracy is a question, and where you have white supremacy and systemic racism - there are many, many cases of criminal justice misfires where facial recognition information makes it from surveillance technologies all the way into criminal court proceedings without any real verification that the person identified in the image is really that person.

So we just don't have nearly enough teeth around the ordinances and the regulations of those tools and technologies. We have a lot more to do where that's concerned. But I think this "both and" area is a really important framework for us to just be able to recognize the harms and the good that come from these technologies.

KIMBERLY NEVALA: Yeah, and another learning I've had - or we've collectively had, especially through a lot of these conversations on the podcast and otherwise - is this perspective: a lot of what you're looking into is identifying the future that we want. And that, in a lot of cases, is a break from the past. And a lot of these technologies lean on data from the past. So we really have to think critically about: is that actually what I want to project? And is it possible to do it in this way?

KATE O'NEILL: Yeah, I mean, the bias exists not only in algorithms, which is, I think, where a lot of people talk about algorithmic bias. But we know that we're also talking about bias in data sets. And we are talking about bias in the very existence of the job roles that have created the data sets and the algorithms.

So we have to unpack a lot of that, all the way down this supply chain of how this stuff was created. And we need to know that we're probably never going to unpack all of that. But we have to be thinking about how do we mitigate the risk, and how do we mitigate the harms?

Some of that has to do with some really interesting innovation I'm sure you've seen around AI that can actually attempt to unpack the biases in other AI. And having this kind of discourse, and having this challenge out there, where we can use technology to actually try to improve technology - I think that's an interesting way forward.

KIMBERLY NEVALA: That will be interesting. And I think there's that balance again - particularly for those of us who are technologists - of not falling into the techno-solutionism trap as well. Technology can certainly help us with technology, but it may not solve all of those pieces.

Now, you mentioned innovation. And way back up front in this conversation, you mentioned the importance of meaning. So can you talk about why an emphasis - or more of a focus - on really defining meaning can help us as individuals and organizational entities power greater innovation?

KATE O'NEILL: Yeah, so this is one of my favorite topics within all of the work that I've done in the last few decades.

I've been a fan of and student of the concept of meaning for decades, since I was a kid, really. I think the first time it dawned on me was I was fascinated with languages. I was studying Spanish on my own and French on my own. And I liked the idea that you could take a book, and it would be a book. But also, it could be a [NON-ENGLISH], or a [NON-ENGLISH], or a [NON-ENGLISH], or whatever. And the fact was that there was still that object, but it had all these different words that applied to it.

And that made me really think about the separation between the thing itself and the way we talk about it, and what that meant about the way we talked about it. Those are pretty deep thoughts for, I guess, a 7- or 8-year-old. But it led into a lifelong obsession with meaning.

And so now I think about meaning as truly one of the most, if not the most, foundational of human constructs. I think that it is what makes humans human, that we are so obsessed with meaning. We are meaning-seekers. We are meaning-makers. We crave meaning. We thrive on meaning. We puzzle it out everywhere we go.

And I think you've got to think about meaning as this kind of scaffolding from semantics, like I was just talking about, up through relevance, and truth, and patterns, and purpose, and significance, and all the way out to the most macro existential and cosmic big-picture questions of what's it all about and why are we here.

And when you think about all those different layers of meaning, I think it's really fascinating that you can take all of them and condense them back down to this question of: what matters? In every part of those discussions, you're really asking some kind of question about what matters. Meaning is always about what matters.

And then when you apply that to innovation, and you think about technology and how we're going to solve the problems of tomorrow, it really can be thought of in this incredibly human-centric way by using that same lens of meaning and saying, innovation is about what is going to matter. And that may seem really simple in words.

But when you actually use that lens and start to think about what matters today and what's going to matter tomorrow in this meaning-and-innovation framework, it gives you the opportunity, I think, to really align the way you're thinking about the trends that lead up to this moment, what that trajectory looks like going forward, how you can bring resources together, and how you can make sure that your business focus and your resource focus are pointed in the direction that is going to make the most impact - the most meaningful impact. So that's an incredible set of tools, I think, for people to be able to look through.

KIMBERLY NEVALA: So it sounds like this is a method, too, to build a bridge between what can seem like fairly conceptual, sometimes soft, or just a little bit fuzzy human-centric goals and business strategy, which is very concrete and gives you something to do. So is that true? And is there an example you could give of how that helps us mind those gaps?

KATE O'NEILL: Well, there's an example that always comes to mind - but yes, I think that's true. I think that's why the term strategic optimism has so much resonance for me, because I have been lumped in with futurists a lot in my last several years of work. And I don't resent it. I don't mind it. It's a lovely compliment, I think.

But I don't really think of myself so much as a futurist as I think of myself as a strategist, an experienced strategist, and one who spends a lot of time on insights and foresight.

What I think is interesting as an example of what you're talking about in tying together these kinds of human-centric human experience discussions that, as you say, can be sort of philosophic or abstract with business strategy, which also can be kind of philosophic and abstract at times--

KIMBERLY NEVALA: It can.

KATE O'NEILL: --there is a favorite example, a favorite story of mine from back when I was at Netflix in the early days, when it was like 2000 or so. And we were still in all-out bloody battle with Blockbuster.

KIMBERLY NEVALA: Ah, Blockbuster.

KATE O'NEILL: Still the upstart. Still no guarantee that we were going to emerge victorious from this whole thing. And Reed Hastings was already investing a certain amount of research and development dollars into what were then called set-top boxes, the predecessor to streaming as we know it.

And Roku as a standalone device didn't come out until 2006. And the dedicated streaming plan on Netflix didn't come out until 2007. So you have six, seven years of anticipatory foresight - that strategic planning of Reed diverting R&D money toward the next thing.

With no guarantee we were going to win the here-and-now battle, he was already investing in how do we make sure we're prepared to be on top of the next battle, the next innovation. And I find that to be one of the most inspiring stories I've ever had the opportunity to observe firsthand, and to really take with me and share with leaders often.

And I think that what resonates for me about that is very much the what matters and what's going to matter lens. It's the one eye on the now, one eye on the distant future.

But it also connects to the reality that you've got to be led by what it is you think your business actually does, not by the technology. If he had been led by the technology or the operations of the business, then we would still be renting DVDs. Or Netflix would - not we. I left there a long time ago. But they would still be renting DVDs.

And obviously, what they existed to do was to provide entertainment to people at home. And the way to do that was going to change. He could see - the leadership could see - that that was going to change.

So just in the last couple of weeks, I've been doing keynotes with financial organization leaders, and leaders in the health care space, and so on. And it's very interesting to translate these kinds of insights across industries, and think about digital transformation, and think about how COVID has changed the digital transformation landscape.

But fundamentally, it's the same story. It's the same one eye on the now, one eye on the future. And how do you make sure you're rallying your resources to solve for what the business or what the organization exists to do and not be led by technology, not be letting technology dictate what it is that you are offering up into the marketplace?

KIMBERLY NEVALA: Yeah, very important. And once again, I think you're pointing to the fact that we can't just get complacent and look through one lens or take just one perspective. It's multiple perspectives, multiple lenses.

Another area I've heard you speak about that-- and I would love for you to talk to us a little bit about this-- is being able to look at this at the micro and macro scale of people. So when we're thinking about how do we engage, or a product or service, or what it is that we're trying to achieve, thinking about both the human and humanity. Why is that important?

KATE O'NEILL: Yeah, you know, it's funny. I think COVID was a real clarifying lens for me on this, too. In the majority of my work, I feel like I've been focused on the humanity level. That's kind of what I get hired to do: help companies, help leaders think about solving problems at scale, and think about how humanity is affected by emerging technology, at that level of abstraction.

But when we got to the beginning of COVID, it really felt like there was so much disruption to people's everyday experience. And there was so much upheaval and so much anxiety about it that, really, it became abundantly clear to me really quickly that what I needed to do was focus on the human level, the very, very real in-the-moment human level of what the suffering was, and what the problem was, and what the need was.

And so I was reaching out to my clients and my associates and just figuring out: how can I be helpful to you right now? What can we do to make sure that you're OK for the next weeks, months, whatever? And how can we make sure that you feel good about this transition? Because a lot of us - from keynote speakers, to the event industry overall, to obviously health care, obviously education - so many industries had to pivot so hard, so fast. And it threw everything into a tailspin.

So really thinking about, yes, we can talk about the digital transformation discussion. We can talk about how we're going to operationalize some of this pivoting that's happening. But also, I want to make sure that you're OK. I just want to make sure that you as a person, and your family in your home are going to be OK today, and tomorrow, and next week.

And I think a lot of people experienced that. I get the impression that many of us got to feel the very real, urgent sense of that. And I think that's a really good lens to bring to our work when we're thinking about human experience, and digital transformation, and big-picture strategy. It helps to think about it at a really big scale because we want to make sure that we're set up for scale. We want to make sure that we're set up to do things at the most macro level possible.

But we also need to be able to do that zoom in, zoom out, and make sure that the actual humans, the actual people who are part and parcel of those discussions, are OK - that they're on board. There's a lot of cultural problems and cultural resistance that comes along with digital transformation. And one of the things that we can take away from this is to remember to check in with our colleagues and with our clients when we're dealing with big, massive change and transformation.

People need to be able to adjust and adapt to the new landscape, the new reality. So I think it's a very empathizing and humanizing kind of experience to be reminded that we can't only talk about the biggest scale and the most abstract, most ambitious outcomes. We also have to make sure that we're getting back to that really specific one-on-one human experience as well.

KIMBERLY NEVALA: And just another great example of the "both and" concept in principle, thinking about both the collective and the individual. And assessing both of those views, I think, helps us get to smarter decisions and better thinking about what the future really looks like, because we're not going to optimize necessarily one at the expense of the other.

KATE O'NEILL: Exactly. One doesn't substitute or supersede the other. We can't say, oh, OK, now we're only going to do one-on-one human-level experience strategy. That won't work. We have to be able to have the agility and the mental clarity to be able to do that exercise of going back and forth between one level and the other.

KIMBERLY NEVALA: Now, in that vein, you encourage people to spend more time thinking critically about the implications if things go wildly right at scale, as opposed to just thinking about what might happen if things go wrong. And it feels almost counterintuitive.

KATE O'NEILL: Yeah.

KIMBERLY NEVALA: But it does echo something David Ryan Polgar recently said to us, which is, many of the unintended consequences with AI have been a failure of imagination. So can you talk to us about what happens when we don't spend enough time thinking about the implications about being wildly successful?

KATE O'NEILL: Yeah. Yeah, I think this is one of those things that once I explain it to people, a lot of the time people almost do this forehead slap and go, wow, I can't believe we haven't been having that conversation. In many organizations, most organizations I have found, when you're starting a new project or program, there's often this period of thinking about what are the risks? What could go wrong?
And you document those, and you plan for those. And that's good. And you need to have that.

But we also don't, in many cases, talk about, OK, what if this goes really, really right, and it goes so crazy successful that it takes our servers down? What if it goes so well that we have trouble with our existing vendors and suppliers, and we need to make sure that we've got a backup of those? What if it goes so well that people are trying to sign up for the system and they can't get in?

There's any number of ways. It's kind of a what-could-go-wrong exercise, but through the lens of what could go right: while it's going so right, you have to be prepared for the possibility that things go wildly successful.

And I think that probably feels like fantasy to many people when they're at that moment. But things happen. Things take off. And oftentimes, some of the most catastrophic outcomes have come - as David Ryan Polgar, one of my favorite people, said, as you noted there - from that failure of imagination, from failing to think ahead. Let's say you're launching a food product. And suddenly, Whole Foods decides they want to carry it in all 300 or whatever of their stores. And now you've got a scale problem. It's a delightful, world-class scale problem. And good for you. But it's still a scale problem.

And if you think from the beginning, gosh, we've got to be prepared for that moment when Whole Foods decides to pick us up. Let's make sure that we are familiarizing ourselves with all of the suppliers and distributors that we're going to need to know when we get to that point. Let's make sure that we understand how we're going to quickly ramp up our production, how we're going to do all the things that it takes.

And on the AI side, it's exactly the same. It's thinking about how the little decisions that we sometimes encode because we're trying to get things done quickly, and we're trying to kind of sloppily put something together just to see if it works, to rough it together, some of those things accidentally end up sticking around through production.

And we end up unintentionally scaling something that wasn't very well thought out. And sometimes those can be intake forms and questionnaires. Or when you have people who have to sign up for something, and the onboarding process hasn't been particularly well thought out, it's not an experience that really reflects what's meaningful and valuable to the company who's creating this thing. It's not the experience they want people to have.

Or think about Clubhouse, for example. This is one that's really fun. When Clubhouse was really hitting its first big growth inflection at the beginning of this year, everybody had, I think, 10 invitations. But you could only send those invitations if you shared your entire phone address book with the app. And the privacy risk implications of that are obviously tremendously huge - no one who has any concept of data privacy should want to just flat-out share their entire address book with an unproven app.

And so sure enough, they finally disconnected that. But those are the kinds of decisions that once they reach a certain level of scale, there are implications in terms of data risk. There are implications in terms of breaches and the liability that the company takes on for themselves.

But it's also just a cue to the users, a cue to the people who are using the system, that says: we don't trust you with these invitations. We're not feeling generous enough about them to just let you invite people. We're asking you to make this trade in return, and that's a really unfair and unreasonable request.

And so I think that those are the kinds of things that probably people just don't think about. But if you do think from the standpoint of like, well, if there's a lot of people using the system, what's it going to mean for us if we have this in place? So I think there's all kinds of ways we can challenge ourselves to think like that.

KIMBERLY NEVALA: Yeah, and there are certainly plenty of examples where perhaps we could have thought a little more about that. And social media in particular is probably the most pervasively obvious one now: well, what if this does become the primary way people engage, or the primary or sole way they form opinions or get their news? What does that mean? And maybe having some of those conversations earlier might have informed a slightly different approach - or maybe will, moving forward.

Now, you've been an advisor for a long time. I have been as well. And years ago, we used to talk about change management as a discrete process that had a start and end point. You were trying to get an entity from A to B.

Today, I think we speak more of managing change as an ongoing practice. And, indeed, habituating to change is a key pillar of your brighter model. So in an era where, again, changing your mind is often perplexingly seen as a negative, how do we normalize the idea of habituating to change? And why is this important for individuals and organizations?

KATE O'NEILL: Yeah, I think it is, as you say, it's just the norm that we're in right now, that things are changing for us so fast. There was a stat that I was quoting for a while, and I think it's out of date by now. But even just a couple of years ago, 70% to 80% of CEOs felt like the next three to five years were going to bring about more change than the past 50 years had.
And again, I was quoting that probably two or three years ago. And I would imagine it's irrelevant now. We probably have to accelerate even the timeline of that statistic.

I think we all just kind of feel it. We feel the rapidity of change. We feel the accelerating sense that-- climate change, for example, the climate crisis-- anyone who's been really paying attention for the last few decades has seen these benchmarks happening, and the movement and the progress of the warming indicators. And the things that climate scientists pointed out were going to happen started happening. And so it's not a surprise.

But I think to the general public, the sense has been: we still have time, people are going to handle it, leaders are going to step up. And to see what's happening, where we've reached such a tipping point - I think the general public is beginning to be more aware of this sense that we cannot turn certain things back. There are things we can do to make sure that we minimize the harm that's coming. But from this point forward, there are certain things that are unfixable.

And that means that we're going to continue to see accelerating change, accelerating catastrophe, and climate emergencies around the world, as we've been seeing more and more with more frequency in the last few years-- wildfires, hurricanes, floods, and so on.

So I think even that alone, because it happens in the world we can actually live in, and observe, and witness, and sense, feels like the most obvious harbinger of this constant change. But then there's also the whole sense of automated intelligence and what that means for the future of work and jobs. And of course, COVID has shaken up the entire job landscape and work landscape. What does the future of the workplace look like? And what does it mean to have these remote and distributed teams?
And how is that going to change the way that we go back, or don't go back? I just think we're in a moment where everything feels so in flux, and the landscape is changing so much, that it behooves us to get used to the idea that there are certain things that we can ground ourselves in - and those are the things that matter. Those are the things that are meaningful.

Our realities, our relationships with other people, the value that we bring and that we understand we bring - those are things that I think are more timeless and that we can anchor our understandings in. But the externalities are likely to keep changing. And I think we need a level of adaptability that's going to keep us healthy and flexible as those externalities change around us.

We can still very much bring value in a world where more and more automation is happening in the workplace. It's not like we're not going to need human value. That seems like an incredibly unlikely outcome any time soon.

So the way I anticipate, or I'd like to see, people adopt this habituating to change is with a mindset of anchoring in what matters and in the things that we can really feel attached to, which are the human values and the human relationships that we have. Everything else I think we have to accept as, if not ephemeral, then at least subject to a certain amount of evolution and revolution. And that is just going to be, I think, increasingly the case.

KIMBERLY NEVALA: That was really amazing. And I want to thank you, really, for making optimism, I think, not just acceptable but strategic. Adopting this stance might be one of the most concrete steps we can individually and collectively take toward architecting the best future for all.

And - as I think I said to you off-mic earlier on - you've converted at least one previously self-professed realist into becoming a strategic optimist. So thank you.

KATE O'NEILL: Ah, I'm so glad. I'm so glad. Thank you so much. And thank you for having me on in this discussion. What a wonderful discussion. Thank you, Kimberly.

KIMBERLY NEVALA: Thank you, Kate. I don't think we could have asked for a better note on which to end our second season. So if you've missed any of our stellar guests this season or last, or want to revisit a new favorite such as Kate, now is the time. We'll be back soon with more ponderings on the nature of AI. So subscribe now so you don't miss it. Cheers.

[MUSIC PLAYING]