Pondering AI

Melissa Sariffodeen contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique. 

Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.

Melissa Sariffodeen is the founder of The Digital Potential Lab, co-founder and CEO of Canada Learning Code, and a Professor at the Ivey Business School at Western University, where she focuses on the management of information and communication technologies.

A transcript of this episode is here.

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Melissa Sariffodeen
CEO, The Digital Potential Lab

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

Today I am so pleased to be joined by Melissa Sariffodeen. Melissa is an educator, an entrepreneur, and founder of the Digital Potential Lab. We're going to be talking about the promise (or the myth?) of digital transformation, why AI seems so inordinately disruptive and if it really is, and why unlearning might be the key for us as individuals and as organizations to transform productively with AI or other technologies. Welcome, Melissa.

MELISSA SARIFFODEEN: Thanks so much for having me. I'm excited to be here.

KIMBERLY NEVALA: Let's start with a little bit of your background. Maybe you can give us a quick synopsis and tell us a little bit about the work with the Digital Potential Lab.

MELISSA SARIFFODEEN: Yeah, it's great. So where do you start? (LAUGHING) Where do we start?

I've spent the last just over a decade teaching technical skills to hundreds of thousands of people across Canada, focusing on anything from an introduction to web making to AI, which we actually started teaching back in 2017 when it was niche. We could never have imagined then where those skills and that knowledge would take us.

The Digital Potential Lab is something I started in the last year or so that really focuses on building, collaborating, and investing in technology, and more specifically, the people who build technology. My whole premise and what I want to do is help people build things that matter. And I think we need the skills that underlie that.

In addition to that work, I also teach leveraging information technology at the Ivey Business School, one of Canada's top business schools, and really have an opportunity to support undergraduate students that way as well.

KIMBERLY NEVALA: Now, we've been talking about digital transformation for a very long time. And you've been teaching technology skills and about the use of information and knowledge. But despite all of that, and despite the narratives to the contrary, this still seems to remain a really difficult task for most organizations. Why do you think that is?

MELISSA SARIFFODEEN: That's such a good question. I use this example when I speak or when I speak to individuals. I actually tell a personal story, where this was a bit of an aha for me that helps maybe to explain some of this.
I'll keep it really short. But I started doing this work under Canada Learning Code. It was really under Ladies Learning Code. We were teaching adult women technical skills because there weren't opportunities that were welcoming to us. And we started to run workshops. We really, early on, realized we needed to also support young girls. So we started running summer camps.

I remember it was one of our first summer camps back in 2013, where we had a group of girls ages 8 to 13 come together to learn an introduction to coding. But it was summer camp, so it was fun. We did crafts. We were outside. We did all the things you'd expect for a summer camp. And it was that first day, just before the girls started coding. They were in the classroom. And this young girl, 8 years old, comes crying out of the room, like, sobbing uncontrollably. And I run over. I'm wondering what's happening. And she says, I don't want to do coding. I'm not going to be good at it. My parents are going to be disappointed.

I remember thinking, wow, this 8-year-old girl at a fun summer camp already doesn't want to try this thing. She doesn't think she'll be good at something she's never even done before. What is going to be in store for her future? Not to mention all of the adults that have been living their life for much longer. So at the end of the day, it's this mindset and the accumulation of beliefs, knowledge, and skills that we have about ourselves that creates this barrier or this resistance. We can get into it at an organizational level and individual level. But that story, for me, was the aha that put me on this question: why is it that some individuals thrive during periods of change and others don't? This young girl, Taylor, for me was that critical incident that got me on this path.

KIMBERLY NEVALA: So feel free to take this at the organizational or maybe at the individual level and we'll build it back up to organizations. As you think about that anecdote, although anecdotes should be funny and that's really kind of sad….

MELISSA SARIFFODEEN: Yeah, relevant and relatable, I guess.

KIMBERLY NEVALA: It is. What are the primary barriers or roadblocks - as individuals think about moving forward, learning something new, doing something different - that you have found to be most impactful or meaningful?

MELISSA SARIFFODEEN: Great question. For individuals and organizations it's really similar. There's this desire for us to be right, desire to please, desire to have control. We don't like uncertainty. We don't like ambiguity. I think we've all lived through the last couple of years, where we've seen that firsthand. So it's very similar.

Then the other parts or barriers that exist are, at least at an organizational level, the incentives that we add on top. So we really value outcomes and a particular goal as opposed to the process to get there. We sometimes, as much as we want to have strategic clarity, lack it. Or we lack the communication of that across teams.
So if any path will take you where you're going, you're going to take any path. And that ends up reinforcing this idea of doing things the way they've always been done. Which then reinforces your bias, your beliefs. There's no reason to question or do things differently. It’s all of these things, from human quirks to the incentives and the structures that we put in place, intentionally or not, that just keep reinforcing these deeply held beliefs, knowledge, and skills that we have about ourselves or our capabilities.

KIMBERLY NEVALA: So we hear everywhere that AI is fundamentally different. It's going to change things, it’s by definition disruptive. Therefore, we should expect all things in the way that we work, the way that we interact, the way that we think of ourselves as humans, to fundamentally change with the introduction of this tech.

So I suppose the question is, or maybe it's two, because I can't seem to hold myself to one. Is AI fundamentally different, in your opinion, your experience? And b, is the presence of a new tech enough to lead to a different outcome or change the path we're on?

MELISSA SARIFFODEEN: I'm not the technical expert. So I mean, there's probably a whole debate there. What I'm seeing, though, is that the pace of change, we know, is absolutely faster than it's ever been before. That's different. The types of individuals and roles that are being impacted are very different than in past digital advancements or transformations. Those two things are really important to underscore, at least in my line of work, which is the people side of technology advancement. I think those make it very, very different. There are also technical components, which we can, I'm sure, get into a little bit more, that do make AI specifically really different than the technologies we've seen before. Or at least the possibility and the potential are different than we've seen before.

I also wanted to jump on something you said around disruption because it is a bit of a buzzword. I like to think about a particular model of disruption. It's by the scholar Joshua Gans and builds on Clay Christensen's model of disruption. Some of the listeners here might be familiar. It's a very well talked about model. But it's this idea that disruption, and what they would say is supply-side disruption, really comes from within.

And that's the part that with AI is so different than maybe advancements in the past. In that it requires a very fundamental shift in the way that we put together the business strategy, processes, and the people than we've seen before. Yes, we can augment or, I think in your words refactor, and think about AI that way. But if we're really going to get at the root of the transformational power it has, it does require us to put the pieces together differently. That's the part that's disruptive. It's not necessarily that organizations can't see it or aren't working to make the shift. It's that it changes the entire architecture or the entire fabric of a way an organization runs and even thinks about its work. That is what's very different than maybe cloud or some of the technological advancements we've seen in the past.

KIMBERLY NEVALA: We say that. We certainly see that out there. It's not entirely clear to me at this point that that's necessarily how we're approaching this.
We can look at something like generative AI. And we talk about that as fundamentally transforming how humans work, giving you a digital assistant. So on the one hand, there's that: it sort of changes everything. And yet it's largely being applied to grease the skids. Or maybe to augment some of how we do existing tasks. But really, it is still in the context of an existing task, right?

You get these crazy conversations like, well, hey, we can still have all these meetings. We'll send our avatars! Which…why? Maybe you're not actually asking why you have the meeting. It seems nonsensical to me to send two avatars to have a meeting and churn out some notes.

Or we're looking at a process. We'll use the generative AI tool to maybe create an outline or to summarize something or to create an initial graphic. It doesn't though, or at least I haven't seen examples, where people are thinking about transformation in a more wholesale way.

So I suppose the question is - questions because, again, can't seem to hold myself to just one here. Have you seen an example of a wholesale transformation within an organization? Whether you have or have not, you mentioned thinking about strategy and process differently. What would that actually look like?

MELISSA SARIFFODEEN: I've used this analogy before of the way that we think about AI maybe as being icing on the cake. Your examples there, having your Otter take meeting minutes, notes, transcribe them, send them out, that's really the superficial icing on the cake.

Where I think we can go is seeing AI as the flour baked into the cake. That idea that it is AI first and that AI is at the core of the way a business operates, markets, funds, supports, hires. Across all of those pieces. A lot of the startups we see in this space are doing that from the beginning.

In terms of enterprise or large organizations, I think there's parts of that happening. But we're not necessarily seeing that and I think it's for a lot of good reasons. There's risk. There's all these questions that we have about AI that we want to answer. And it's a little bit more challenging to do that when you're not at the early inception of an idea. So that is maybe, it's a reflection of where we are.

But that aside, there are mindsets or ways to think about how you take all these pieces of a business that are changing and put them back together a little bit differently. That's the crux of this disruption: all the pieces are the same. They just need to be put together differently.

The one thing I think about and talk a lot about is even just the way that we approach a technology like AI and the language that we use. We often have talked about technologies of the past as tools. They are one directional. We as humans interact, give direction. And we can wield these tools in our path.
I have gone on the record so many times, including national TV, saying, technology is just a tool. We have to unlearn that. I think that technology is actually a partner.

That is a big shift to make: that it's not just this one-directional, passive thing anymore. It is a relationship. And we should respond to technology in the way we would with any relationship.

So first off, manage that relationship. Set expectations. Bring feedback. But I also think ask questions of it like we would in any relationship. Am I getting more than I'm giving or equal? What is it taking from me? Are we clear here?

And in organizations, how do we set accountabilities for AI, set targets, KPIs for AI? How do we build those relationships? That is also very different.

So when we just think about it as a tool, we will stay, I think, at this superficial level. How do I augment this? How do I automate this, in this one direction? Where I do think AI and specifically GenAI gives us an opportunity to open up a relationship. Where we can have that dialogue.

And then, without getting too much into it, some of the ethical considerations around this does make us then question and have that important conversation and dialogue we need around risks and governance and all those pieces that we wouldn't maybe in the same way if we just think of technology as a tool.

KIMBERLY NEVALA: It's an interesting conundrum in a lot of ways. Because as I hear you speaking about thinking about this, I also often say we need to think about it as a tool. Or as a system and ask the same questions that we would of any software system. Maybe not as a hammer, although I will say that I expect a hammer to work a certain way. If I hit certain things, appropriate things with it, the head stays on the hammer. And it does what it is supposed to do.

But the risk when -- So when I hear you talking about let's think about it as a relationship, where there's a give and take, it’s really as a means to open up a broader set of conversations and a broader set of thinking about requirements and risk. As opposed to, when someone hears that, they might be then making this jump to: you need to think of the AI system as an individual agent. With its own agency in the world that's also directing you. That, I think, could be rough. But maybe that's more from the user experience perspective and not what you were getting at. I struggle with that a little bit.

MELISSA SARIFFODEEN: Yeah, I think that's fair. I think that's fair not to think about it necessarily as another individual in a relationship, like, you know, a spouse or a friend. But to think about it having some of those characteristics I think is important. And also recognizing where we can co-create, collaborate, and augment.

For me, it's about opening up that dialogue as well. Recognizing that it has a role and a place in work and in whatever function it might be that we should be also expecting of it similar things. So we should be providing it feedback. We should be, as you would with any team member, we should be thinking about not just delegating but managing and being accountable and responsible for the outputs in that relationship as well.
So, yeah, again, I don't want people to walk away and think it's human and has a mind of its own necessarily. But if we just think about the relationship as one-directional and don't put on it the same expectations that we might otherwise, or if we leave it as a tool and don't think about managing it or the responsibilities it has, I worry we just miss the mark there. But, yeah, that's a great clarification.

KIMBERLY NEVALA: In an earlier episode we had the opportunity to talk with Marisa Tschopp who's a human-machine interaction researcher. She said the language of relationships is more conducive to this discussion than the language that we use when we think about tools. So I think you're spot on there.

MELISSA SARIFFODEEN: Yeah, exactly.

KIMBERLY NEVALA: It's still a little bit tricky. Maybe this also comes down to that prior point. Which is you have to think about it as something that you're getting something back from. It's not that different in some ways from other computer systems. Maybe it's a recommender system or something like that. You choose to select or follow its recommendation or not and then it adjusts based on the data.

But particularly with the language component now, it feels different. So perhaps the other component of that is that as designers or the folks who are architecting the fundamental systems, the foundational models, those bits, they need to ensure that they're not pretending this thing is human. The relationship language is more about the interaction model than it is on the design of the system itself. Because it's not going to self-correct, it's not going to do these things.

MELISSA SARIFFODEEN: Yeah. For those of us that are working, building, or living with AI and GenAI - let's use that as an example - that has just changed dramatically in the last couple of years. We know that AI, and components of it, have been around for a very, very long time. So has predictive analytics and some of the base of it.

But for those that are living, especially those that are living and working with it, it's just recognizing a bit of a shift there than we would have with a search engine typically or even a recommendation engine. There is more there. And so thinking about that mind shift. In organizations you've got such a diverse group of team members. People who are technical, who are building the technology. But the vast majority of us are just trying to figure out how this all makes sense and how we interact and live with it. So I think it's just a reframe.

Technically, it is a tool, I guess, if I were to look up the definition. For me, though, it's just that reframe: to recognize that you can expect things of it, and that you also need to interact with it a little bit differently. Like your prior guest said, it's the relationship and the language of a relationship, as opposed to just a one-dimensional or one-directional or passive tool.

KIMBERLY NEVALA: It opens up the ability to have a better conversation, too, about - just as there's lots of good people in the world but they might not all be good for you - is this healthy or unhealthy? Is it helping us get to the objectives we want regardless of how impressive its capabilities may seem to be?

MELISSA SARIFFODEEN: Totally, totally. And also are we getting what we need from it? And is that enough? Especially as we think about the skills piece and the transformation that's happening, the augmentation of it and how it's going to support us. We should be asking questions of technology as well. Is it giving more than we're getting? Yeah, which is probably a whole other conversation.

KIMBERLY NEVALA: So you mentioned skills. There's a lot of conversation - again, let's put it out there and assume that people are rethinking the ingredients of the cake. Taking that step back and looking at business strategy and process more holistically. This conversation about reskilling or upskilling is then coming along with it.

You have put forward or hypothesized - I think "hypothesis" is maybe not a strong enough word - that we need to also rethink or transform the process of skill acquisition, of learning. And specifically what you mentioned a little bit earlier, unlearning. What is unlearning and why does it matter?

MELISSA SARIFFODEEN: So after this experience I had at summer camp, I dedicated the last decade to trying to answer this question. Why do some people in orgs thrive during periods of technological change and others don't?

I went to a few different places. To academia or the research on this. I went through experience, teaching hundreds of thousands of people through our programs. Working with teachers as an example, who absolutely have had to change the way that they do their job. And I started to see how some of our prior models of learning maybe aren't complete.

And I think the biggest thing for me, the biggest aha, as I've shared before, was when I taught my dad how to use the adaptive cruise control in his car. Not an autonomous vehicle. It's funny, we weren't even there yet. But it had a lot of those same components.
For me, watching him acquire or try to acquire the skills that were required in that and to make peace with this new technology, I saw within him this breakthrough. He was holding on to so many of those existing ideas and beliefs and skills he had of driving. It was so hard for him not to touch the wheel and then it would turn off. All those things, really simple.

So when I looked at the models of learning, we've got different ones, like the T model. We've got learning curves. What I realized is that we have this basic belief, I think many of us, that knowledge builds. That what we learn, our skills and our knowledge, accumulates. And we're always learning new things. This notion of lifelong learning or learning organizations, we've been talking about that for decades as well.

But there are some changes to technology and there are certainly times, like right now, where I think that this notion that knowledge always accumulates and it never depreciates is missing the mark. Because there are times where we need to let go of a way we've thought about something or a particular skill to make new for this disruptive technology. Or the circumstances around that knowledge or the skill have changed so much.

So when we think about technology skills, we know that the pace of those is changing very quickly. We also know that the half-life of these skills is shortening. For most technology roles, it's around two and a half years before half the role's skills and knowledge are obsolete.

So how do we make space for that? That is this idea of unlearning. Unlearning in organizations, unlearning as individuals. It's really trying to release those things that no longer serve you - those mindsets, that knowledge, those skills - to make way for the new. Because I do believe that the circumstances around GenAI specifically have changed so much that it requires us to unlearn and make space for thinking about the world and acting in a different way.

KIMBERLY NEVALA: Now, to be clear, you're not talking about folks necessarily having to reinvent themselves every two years, or saying that we shouldn't develop foundational expertise, right? Basic skills that build over time. Or that there is no such thing as expertise in the future. So is there an example? Because you've done technical training for a long time and that might highlight this a bit more. Or even a non-technical example, frankly.

MELISSA SARIFFODEEN: Yeah, I have to be clear that I'm still a big believer in lifelong learning. I don't think we ever stop learning. So this idea of unlearning is not like it's the only thing. It's really just an extension of learning.

So if you can think about a learning curve: at the beginning of learning something new, like a technology skill, you don't know anything. But over time, you learn more and more. What I'm suggesting is that we should also build in, in the acquisition of knowledge and skills, opportunities to question those best practices or that knowledge and those skills. That is the unlearning piece.

I have some suggestions within skilling, within organizations, that are worth unlearning. Some of the shifts or reframes that are worth doing. For me, actually, I see this all the time in teachers. I know I mentioned it a couple of minutes ago, that a lot of teachers will come to our programs. And they do that because they now have to teach with and about technology like AI.

And many of them would come to our experiences open minded, wanting to learn, absolutely wanting to meet the students where their needs are, but holding on to this idea of being the sage on the stage or having all of the right answers. This is the perfect example of this. Whereas there was another group of teachers that were like, mm, I get it. They were willing to release that, willing to see themselves as a learner in the classroom, and able to make such a foundational shift in the way that they thought about introducing these skills to students. So that's an example where everything you've learned in a career, 20 or 30 years in, you now need to unlearn and relearn, in this spirit of lifelong learning, as a new way of approaching the classroom.
So that's an example I give. But in organizational settings and with particular technical skills, there are lots of other things we want to question that may not serve us into the future.

KIMBERLY NEVALA: So when you look from the perspective of an organization or a corporation looking to acquire, extend, and leverage things like AI, are there two or three top shifts that you think need to be made that are fairly consistent in your experience?

MELISSA SARIFFODEEN: Things worth unlearning? Yeah, definitely.

I think there's one, which is just the way that we approach learning in organizations. Often, you have a learning function in an organization. And it's separate from the business function it might support. But when technology skills are changing so rapidly, I really believe the ownership of that learning should happen within the business function. Then the learning function can support with maybe how to best help people learn but not what to learn. So that's unlearning the structures of how we've thought about training as one really great example.

Really broadly, just having a perspective of challenging best practices across the board is really important. When we look at organizations that have made digital transformation shifts and those that haven't, GE and Siemens are often great examples we think about from the last decade or so. Siemens really was willing to, again, challenge best practices and build a culture that was willing to fail forward. And so that's another really important way. A place to unlearn is to really think about prioritizing and valuing doing things differently. How do you come up with the structures and incentives to do that?

And then really tactically, how we talk about learning also. It's often the number of workshops delivered. How many trainings have your staff gone through? The number of hours in courses. We need to move to the outcomes. Not just the outputs but the outcomes. So unlearning the way that we talk about it.
Talk about enabling the workforce, the future, as opposed to training.

All these small, little, really tactical pieces are some of the recommendations I would have. Everything from the broad, like questioning best practices, to rethinking the roles and responsibilities of your training function, to just the way we talk about learning, can make a big difference in the big shift that organizations are going through right now.

KIMBERLY NEVALA: We've talked in the past about having a lot of risk management in organizations. And the idea there is to minimize, if not to avoid, risk in all things including some of our transformative efforts. Is that focus on risk holding us back in some places as well?

MELISSA SARIFFODEEN: Yeah, it reinforces the status quo in a lot of ways. And there's an element of pace. I don't want to knock risk management or mitigation. I think it's super important. We need that, those diverse perspectives.

But this is also something we need: more diverse perspectives. So questioning those components.
Bringing people together or reducing silos, I know, is the thing we all want to do in organizations. And some organizations are structured intentionally to create silos if we think of certain industries. But bringing those people together is one really great way to help us unlearn, challenge things.

So I think it (risk management) plays an important role. But does it reinforce the things that we've always done or the ways we've always done it? Yes, absolutely. We see that.

But we see organizations that are able to counter that. We've seen organizations that have done things that would be considered radical. You think about IBM a decade or so ago that trained their workforce on design thinking. Nobody thought they could do that. But they just did it.

So the other thing, when we study disruption - specifically the supply-side or architectural disruption, in which the way that you run your business, manufacture, or offer your services is changing so drastically - some of the really important things for organizations to do is to establish a really strong culture. This is, again, from the research of the scholar Joshua Gans. It's really ensuring that there is a strong identity, a strong culture, a focus on failing forward. So risk is important. But there are ways that we can continue to move forward and not let that hamper progress.

KIMBERLY NEVALA: I suppose this hearkens back to what you said at the top about what is it that we are focused on? What are we measuring? Are we looking at the ticks and tacks of the process? Are we looking at – I liked this - outputs versus outcomes. Although it strikes me again that we may, as decision-makers, as leaders, even as individuals, have to take a really large step back to think about what is the actual outcome? Because thinking back to some of the projects I've done in my consulting days, I could think of a few examples where the clients were focused on what we called outcomes but were really just outputs.

MELISSA SARIFFODEEN: Outputs. It's so much easier to focus on the outputs, right? It's so much easier to measure the outputs as well, especially when… it's hard. Any time, it's hard to see into the future.
But especially right now, with the rapid pace of change, for so many leaders and so many employees it's hard to picture what comes next. It's hard to even think through that.

So, yeah, I can appreciate and understand why we do that. I think it's our responsibility, if we really want to meaningfully leverage the power of these technologies and help people make the shift, pushing through that. Working through that is going to be really important.

KIMBERLY NEVALA: So are there specific recommendations or things you would tell leaders about how they can help raise the organization's gaze, if you will, the competency in this regard at the same time as they do still need to operate the business as it is?

MELISSA SARIFFODEEN: Yeah, absolutely. The strategic clarity piece is the most important. I think that, again, we probably could have this conversation on a lot of different topics. Understanding that is going to be really key.

Then the second thing - and again, my bias here is around people, organizations, and skills - is really engaging teams along the way. That's the other thing to unlearn. We've typically thought about workforce development as top down. There are organizations, like a large automotive manufacturer making the shift from diesel engines to electric, who said: you aren't going to be needed anymore, but that doesn't mean we're not going to support you to find what you're going to do next.

This is an opportunity, no matter how large or how small your organization, to engage the people that are being impacted and help them come up with those solutions as well. I know there are some logistics challenges to that. But even at an individual team level, letting that cascade up, there's an enormous amount of opportunity there. These are people. Organizations are people. And these skills and the transformation of these technologies are impacting people most. So engaging them in the solutions is going to be really important as well.

KIMBERLY NEVALA: I was asked a question recently that really struck me. And it was phrased in this way: as AI takes over more jobs and functions, do managers need to become more skilled and creative in managing basically the machine? I have a gut-level negative reaction to that.

But my response really is, no. The task of managers and leaders moving forward is not to manage the AI. Our task is to value the human resource, the human skill, and the human ingenuity, and to figure out where we can apply those to get value and drive the next innovation with human labor.

Now, as Simon Johnson reminded me recently - reminds all of us - that is not the commercial logic right now. But I do think that is the challenge if we want this AI-driven transformation to result in a net better world, one that really does value humans. Is that also… I mean, what's your reaction to that? Is that a mind shift we have to have as well?

MELISSA SARIFFODEEN: Yeah, I think I would have the same reaction as you initially, which is like, ooh. We aren't going anywhere. And maybe there's a bit of an oversell, in some ways, of this doom-and-gloom existential crisis, with the robots taking over jobs. I feel like we've been talking about that for a very, very, very long time.

But I do think the people element is so important, helping people navigate that shift. And ideally, if we think about technology also in this relationship, as augmentation in this co-creation, it will allow us to focus on doing the things that we want to be doing in more meaningful ways. Is there this messy middle that I think we're in? Absolutely. And it's us moving through.

But if we don't keep an eye on that, if we don't think about unlearning and recognize that we need to think bottom up as much as top down, especially with workforce training and skills development, I do think we're going to miss the opportunity there. People are core to this, and so is engaging them meaningfully in the future.

It's going to be great. I think we'll get through the messy middle and it will be great. We just need to support that, as leaders and as organizations: how are we doing that? How do we need to rethink some of the ways our organizations have set themselves up to reinforce the systems that aren't going to allow us to make those shifts? Just what is it in your day to day that's worth rethinking?

KIMBERLY NEVALA: Is there a question or a dimension to this conversation that folks like myself are just not asking? Or you're like, please, please, just ask that question or raise this point for discussion?

MELISSA SARIFFODEEN: That's such a good, yeah, that's a good question. I mean, I think for me, the one thing that I like to take away or get people to think about - and maybe this is what you're asking, or I'm going to at least offer – is yes, I've talked about how technology, AI, is different. It's disruptive, how maybe it's flour in the cake and not icing on the cake.

But fundamentally, the future, I believe, is made up of all the same things that the past has been made up of. It's people. It's business strategies. It's processes. It is technologies. It's just that we're putting them together, or we need to put them together, a little bit differently. So it can often feel a little daunting, especially as we look and we can't see that far over the horizon. I believe it's all the same things. It's just trying to figure out what that new formula is going to be for the future.

And I'm really excited about that. Even as fast as technology is changing, we are just at the very beginning. So there's so much possibility and opportunity there. We just need to be diligent and intentional about opening up those pieces and being willing to put them together differently. And if we do that in small and in big ways, I feel like we're just getting started. That's my big takeaway, yeah.

KIMBERLY NEVALA: A practical, technical optimist.

MELISSA SARIFFODEEN: Yes, fair.

KIMBERLY NEVALA: I've started saying the only true optimists are actually skeptics. Because they're working really hard to figure it out precisely because they want it to work. So instead of just saying, it'll all work itself out eventually, it's fine, of course it's going to do well - back to that point you made earlier - they're putting in the work to figure out how this will all work.

MELISSA SARIFFODEEN: Totally. Got to solve this, solve this, solve this.

KIMBERLY NEVALA: Alright, so any final thoughts or words of wisdom? Or here's the top one or two things we as leaders, as decision-makers, as folks in organizations being asked to adopt and adapt to these technologies, should take away?

MELISSA SARIFFODEEN: As much as I am a techno optimist about all of this, I do think the next step is to be critical, to think critically, to challenge those practices. And it can be as big or as small as you like. Bringing people to the table, bringing the people impacted, bringing diverse perspectives - these are the little actions that you can take.

There are big ones around how you structure learning in an organization. But if you're not there yet, I think progress is an achievement. So just question that: is this perpetuating the way that we've always done things? Even by just recognizing that tendency for organizations and change efforts to reinforce the way things have always been done, I feel like you're already in a mindset to move in the right direction.

So my takeaway, or my ask to folks that are listening, is to just think about that. Think critically about all of the things that you're doing in your business or in your personal life. What is something that's worth challenging and unlearning as you go forward?

KIMBERLY NEVALA: Well, thank you so much, Melissa. This has been awesome. I love this idea of learning to unlearn, and not just because the words trip off the tongue in such a nice little way. It's something I've thought about in my own life since we met. Because there are a lot of things you just start to do instinctively and habitually, without thinking critically about getting rid of what's not working and not serving you anymore.

I also really appreciate the reframing of the conversation around thinking about the technology using the language of relationships. That clarification that it doesn't mean we should think of it as equal or alike, but that it's the language that allows us to open that conversation. So again, I really enjoyed the insights and I appreciate the time. Hopefully, we'll get you back here in the near future to talk some more about how we're doing.

MELISSA SARIFFODEEN: I'd love that. Thanks so much for having me. It was great.

KIMBERLY NEVALA: To continue learning from thinkers and doers such as Melissa about the real impact of AI on our human experience, subscribe now.