Pondering AI

Sheryl Cababa reflects on systems thinking in AI design.

In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.

To learn more, check out Sheryl’s book Closing the Loop: Systems Thinking for Designers.

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Sheryl Cababa
Chief Design Officer - Substantial

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: It's day six here at Pondering AI's Insights and Intuition series. Today, Sheryl Cababa reflects on systems thinking in AI. Hello, Sheryl. Thank you for joining us again.

SHERYL CABABA: Yeah, for sure. Glad to be here.

KIMBERLY NEVALA: In the time since we last spoke, your fantastic book has come out. I'm interested in the response to that and your perspective on whether the incredible developments we've seen this year in AI have resulted in a heightened awareness of the need for systems thinking in technical design.

SHERYL CABABA: Absolutely. For those who don't know, I wrote the book Closing the Loop: Systems Thinking for Designers. It is oriented around the combination of design thinking and systems thinking for navigating problems. It's really interesting because I do think the pandemic, first and foremost, heightened people's awareness around systems thinking.

And now thinking about the new technologies and the shifts, especially in generative AI, has people thinking about it again. Because how do we take this thing that's kind of a solution and understand what problems it's meant to solve? We're in that in-between space right now where a lot of people don't really know. Systems thinking is a good way to think about these kinds of new potential solutions and how they might intersect with all of the things we're trying to navigate.

KIMBERLY NEVALA: Are there specific examples that you think could have been better addressed or even avoided with a systems-thinking approach?

SHERYL CABABA: I don't know about avoided. It's interesting because I work primarily in education. There's this really great book called Failure to Disrupt; I think the author's name is Justin Reich. He talks about how emerging technologies are oftentimes inserted into education and they fail to disrupt because of the gnarly, complex system that the new technology is entering.

Part of it is because oftentimes these new technologies are designed for just the point of use. It's a single-point solution without thinking about all of the repercussions. So one thing he points to is massive open online courses and how those were supposed to disrupt everything. Now we're looking at something like generative AI and we're seeing how it's being used in education both intentionally and unintentionally.

One of the unintentional aspects is that a lot of post-secondary instructors as well as, I guess, secondary instructors are pointing to how it's being used to cheat. People are using it to write their essays. And a lot of educators are wondering how to navigate this. For me, that's one of those things where it's like, OK. We need to step back and think about how this is framed.

Is there a way that we need to reinvent how knowledge is being tested? If the way to test knowledge is to have students write and you don't know if it's AI writing it or not, then we might actually have to do the work to consider, well, how do we ensure that people are learning? Because that way of testing them isn't going to work anymore. And there's no AI detector that is going to help you because they don't really work. Or they work for a minute and then ChatGPT outsmarts them.

So that's not a sustainable way of thinking about how to solve that problem. We have to go back to think about what is the purpose of this? What are the outcomes that we're seeking? How do we mitigate that sort of use by thinking about other ways to test knowledge? So that's an example of it being used in a way that's unintended.

How do we then account for the different incentives in place? What are students incentivized by? They're incentivized by the grades that they get by writing these essays and things like that. What are teachers incentivized by? Their students performing well. So there might be some other means of ensuring that that happens by using AI in a different way. Who knows? I don't know all the answers, but these are the kinds of things I've been thinking about lately.

KIMBERLY NEVALA: So we've seen hyperbolic narratives such as: we no longer need to teach people to write. People don't need to be able to do this because ChatGPT and its ilk - large language models - will be able to write for you.

It seems to miss the point a little bit about what it is we actually get from writing and what the point of learning to write is, which is intimately linked in a lot of ways to how we think and how we acquire knowledge. Even if the essay itself as a verification mechanism is no longer valid, that doesn't mean that writing is no longer valid.

So are these some of the dimensions that, if we take a more design-thinking approach, we would be looking at - all of them, instead of just the one?

SHERYL CABABA: Yes, absolutely. Do you ever read a LinkedIn post or something, and you're just looking at it? And you're like, this is giving AI vibes. I think we're really good BS detectors; there's something uncanny valley about it. Where you're like, I don't know, there's something about this that's off.

I think now we're at a point where, I don't know about you, but I'm always kind of questioning stuff when people are writing these quick hot-take posts and things like that. It's, oh, this seems like a really pat kind of perspective. Now, granted, it'll keep getting better and better at mimicking us.
But I feel like there's very little originality in terms of what it actually produces. So I don't think thinking will be obsolete. And for the writing of things we don't like to write anyway, we can alleviate ourselves of that. Writing a really boring logistics email: yes, please. I'm happy to hand that off to a chatbot, right? Just go for it, it is fine.

But when it comes to deep critical thinking, people are maybe too quick to say we don't need to do writing anymore, because there's a critical-thinking lens to it. This is what I mean by needing to test our knowledge differently. Because maybe, if you have students, you're not really understanding whether they understand things conceptually just by having them turn in an essay. It might lead to more dialogue-based interactions with students to see if they really get it.

So it is interesting, because you can parallel it to when the computer first came out. I heard a lawyer on the radio who's been in the business for 50 years or something. Somebody was asking her, are you worried about whether people like yourself are going to have a job? And she's like, I was around when computers first came out and they said we weren't going to have jobs anymore. We keep creating jobs for ourselves. So we're going to have jobs no matter what. We're going to find things to do. So I'm not even thinking about that.

And I think that's generally true. We'll find ways to keep ourselves busy and productive. We are in a capitalist society. We're not going to alleviate ourselves of that. But I do think it's going to mean a shift. And it might mean a shift in where our focus is.

KIMBERLY NEVALA: So as we move forward - we're on the precipice of 2024 here - what do you anticipate coming around the bend as the new year comes into play?

SHERYL CABABA: A lot of organizations are thinking about where does generative AI factor into my organization and how we do our work? Where does it factor into how we serve customers? Where does it factor into how we change curricula for students? And I don't know the answer to that.

But oftentimes, there's a jump in organizational decision-making to, oh, how can this help us be more efficient, without really thinking through, for example, the trust issues oriented around this fear that people have of being replaced by robots. Even if, as I said earlier, when you really think about it, that's probably not the case for the most part. But people genuinely are scared for their jobs. They're scared of this uncertainty. And it's up to organizations to take a lens that is oriented around organizational change before even thinking about how these technologies can help them.

And then, once you have an idea of the changes you want in your organization or the way you want to serve your end users or end beneficiaries, how do you think about the limitations of this new technology? What is it limited to, rather than what is it capable of? I credit that to Ovetta Sampson, who is a director at Google focused on AI. Because oftentimes what we're thinking about is, oh, my gosh, this thing, it can do so much. Rather than thinking about what is the baseline of what it can do? What is it limited to? So we can design for that rather than designing for the entirety of its capabilities, because they really are limited. Even Sam Altman says the more you use these technologies, the more you realize how bad they are. Which is funny coming from somebody who's been such a proponent of generative AI.

KIMBERLY NEVALA: So as folks look forward and, I think understandably, grab that end of the stick - which is, oh my gosh, gen AI and large language models are here; I have to be using them to do things better and faster or to make better decisions - how would you be advising folks to approach that? What should they be focusing on to better orient their thinking?

SHERYL CABABA: As a systems thinker, I really orient around: what are your various stakeholders' incentives? What are they incentivized by in terms of their jobs, what they're trying to get done, how they get rewarded, how you get rewarded? Really consider that before implementing these wide-ranging, impactful things. Because if you don't know what people are incentivized by, and if you don't know what their barriers and challenges are, you might just be throwing a solution in there that might either have unintended consequences or might not be used in the way that you want. Or maybe not be used at all.

It's funny because I forget that people think this way: that there's a new technology and oh, my gosh, it's going to change the world. Yet a lot of solutions end up being like the Segway, which is a point solution that was looking for a problem.

So fall in love with your problem space. What are you actually trying to solve for? Rather than: we have this thing, what can we use it for? That oftentimes results in mixed effectiveness, because you should be starting with: what are the things that could be made better?

Also, is efficiency really what we want? If you think about something like Slack: it was going to make your job more efficient because you don't have to spend as much time in email. You know where I spend most of my day? I spend most of my day on Slack. [LAUGHS]

So this happens over and over again to us. We need to think about what the other motivations are aside from efficiency, because we probably will not become more efficient. So we might as well get to the heart of what it is we really need.

KIMBERLY NEVALA: Wise words indeed. Tough words, but wise words. Sheryl, thank you so much.

SHERYL CABABA: Definitely. I'm still navigating this too. I think it's the biggest technological change in probably a couple of decades. So we are going to be navigating this for the next few years.

KIMBERLY NEVALA: 12 days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.