Pondering AI

Ganes Kesari confronts AI hype and calls for balance, reskilling, data literacy, decision intelligence and data storytelling to adopt AI productively.                     

Ganes reveals the reality of AI and analytics adoption in the enterprise today. Highlighting extreme divides in understanding and expectations, Ganes provides a grounded point of view on delivering sustained business value. 

Cautioning against a technocentric approach, Ganes discusses the role of data literacy and data translators in enabling AI adoption. Exploring common barriers to change, Kimberly and Ganes examine growing resistance from technologists, not just end users. Ganes muses about the impact of AI on creative tasks and his own experiences with generative AI. Ganes also underscores the need to address workforce reskilling yet remains optimistic about the future of human endeavor. While discussing the need for improved decision-making, Ganes identifies decision intelligence as a critical new business competency. Finally, Ganes strongly advocates for taking a business-first approach and using data storytelling as part of the responsible AI and analytics toolkit.

Ganes Kesari is the co-founder and Chief Decision Scientist for Gramener and Innovation Titan. 

A transcript of this episode is here

Creators & Guests

Kimberly Nevala
Strategic advisor at SAS
Ganes Kesari
Founder, Innovation Titan

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. I want to thank you for joining us as we continue to ponder the realities of AI with a diverse group of innovators, advocates, and doers.

Today, I'm so pleased to be joined by Ganes Kesari. As the founder, CEO, and chief data scientist for Gramener and now Innovation Titan, Ganes helps organizations adopt analytics and AI mindfully. We will be differentiating fact from fiction in AI with a specific focus on the state of adoption and readiness in the enterprise today. Welcome, Ganes.

GANES KESARI: Thanks, Kimberly. Thanks for having me.

KIMBERLY NEVALA: All right. So we're going to start, as we tend to do, at the beginning. Can you share a brief overview of your career as a tech entrepreneur and what drew you initially into the wide and sometimes wild world of data science?

GANES KESARI: [CHUCKLES] So I have about 20 years of experience in solving organizational challenges with technology. The early part of my career was using other aspects of technology to solve challenges. And then around 2010-11 is when I stumbled upon the world of data analytics.

At that time, it was a little early on, because we had to do a lot of education on what we really mean by data visualization and what machine learning is for an enterprise audience. That's when we co-founded the firm Gramener with some friends I used to work with earlier. And over the last 10 years, as you mentioned, it's been a wild ride.

[CHUCKLES] It's been great because the industry started noticing data analytics, saw the potential, and has been adopting it increasingly. And through this period, I have played a variety of roles along my career path. Setting up our data analytics practice, setting up our information design Centre of Excellence, starting our AI and story labs for innovation. Currently I lead our advisory and consulting practice. So a little bit of different areas, a little bit of everything. But all in the service of transforming decisions with data, storytelling, analytics, and change management.

Apart from my stint at Gramener and now Innovation Titan, I am passionate about teaching and writing. I have spoken at various forums, TEDx and other industry events. I currently teach at NJIT business school, where I teach business analytics as an adjunct professor. And I also write for Forbes and Entrepreneur on AI. So that's me, Kimberly.

KIMBERLY NEVALA: Excellent. Who better to talk about all of the stories that are swirling around AI then? So there's no paucity of narratives - many interesting narratives - that are developing today around artificial intelligence, or AI. What are you finding most intriguing? And are there any that you think are perhaps a bit insidious in the sense of, they're enticing but may ultimately be harmful?

GANES KESARI: Yeah. I think we are seeing huge hype today. That's unmissable. If for the last five years I was excited about AI, today I'm a little worried, too, because a lot of other people are too excited.

If I look at the typical Gartner hype cycle, I'm sure many in the audience would be familiar with it. There are different stages that innovations typically go through, right? You have the initial phase, where the innovation trigger happens through research. And then over a period of time, there is some growth. And then it hits a peak, which is where the hype is usually high - more than is warranted. And then reality sets in, and people get disillusioned. That's what Gartner calls the trough of disillusionment. People don't value it; they discount it even beyond the value that it is clearly delivering. And then, over time, there are people who come back to it, and there is a slope of enlightenment. Eventually, there is a plateau of productivity, where it's almost a given, and people start building on top of that.

So if you look at that hype cycle today with AI, and particularly generative AI, we are at the peak of hype. Where clearly a lot of the attention is unwarranted. There is potential, no doubt. But a lot of the attention is unwarranted. Which also means that we are going to see disillusionment in this space very soon. We should be prepared for it and for the subsequent recovery.

KIMBERLY NEVALA: I've taken to saying that, in a lot of cases, I think AI at large today is both overhyped and undervalued. In that, when we're talking about things like the GPTs of the world, ChatGPT or whatever it is, there's a sense that it's going to do anything and everything. Which makes me very concerned that people don't really understand how it works. Or the actual effort to make that work, particularly in a business or social context. And on the other side, it's undervalued because we are so enamored with the bright and shiny we are missing the opportunity to deploy it in possibly what might seem like mundane and boring ways that drive really true value and efficiency.

GANES KESARI: Yeah. That's so very well put. And if you come to think of it, this is not the first time that we have seen technology innovations in our lifetimes. One of the most popular ones, which all of us would remember, is the dotcom boom-bust cycle. We went through something very similar, right? If you look at the timeline, the '70s and '80s is when the research phase happened. And the early '90s was the growth phase, where people started seeing, OK, the internet can do something. It's beyond government, beyond educational institutions. And there were companies which were slowly coming in and building around it. The late '90s is where the madness set in: the hype phase. There was an internet gold rush. In 2000, after the dotcom bust, there was the despair phase, or the trough of disillusionment. Then from 2001 onwards, there was a recovery phase. Now we almost consider it a given. And there have been a lot of fascinating innovations on top of the internet today. It has become a very strong backbone.
So similar to that, I think ChatGPT - generative AI broadly - I think that has a lot of potential and like you mentioned, undervalued. We will understand that even better. And then perhaps some of this, the hype, will go away.

KIMBERLY NEVALA: Mm-hmm. And do you think we're at risk right now because there is so much focus on that particular technology, that particular instantiation, that folks just lose track of all the other ways that we could deploy things that are under the AI umbrella? Machine learning, deep learning, et cetera, et cetera?

GANES KESARI: Oh, yeah. Yeah. That's true. So there are so many apart from ChatGPT: so many GPTs or chat-something-else. There is a lot more potential with other types of analytics, with other types of AI. Suddenly, people have stopped talking as much about computer vision. Until a few years back, we saw it in the news - processing videos and pictures. Work is still happening there, but people are obsessed with this (GPT).

And the bigger challenge I see is people are forgetting the value of simpler descriptive and diagnostic analytics. Even today, I see that it delivers the majority of the value in enterprises. We can't put an exact percentage on it, but roughly a majority - what, 60-70% - still comes from simple descriptive and diagnostic analytics. Only a minority comes from advanced analytics. It has its own place.

But are we forgetting the majority because of the recent hype? That's a question we need to think about.

KIMBERLY NEVALA: And how would you respond if someone says, yeah, that may be true, Ganes. But that's really because organizations are just behind. They're not embracing the new, and it's the nature of bureaucracies. We need new business models and new approaches. And that's why those older approaches, or those non-predictive, non-prescriptive types of applications, are not seeing the light of day in the way that the marketing or the media might indicate they are?

GANES KESARI: Yeah, I think there are two extremes, right?

I've seen organizations that have been doing machine learning for a while. Even there, if you look at the need for simple BI, or business intelligence, reporting, they get a lot of value from that even today, despite running a lot of these machine learning projects. So even the forward-looking organizations that have embraced advanced analytics still see value there.

And the other part I think your question is also hinting at: are all organizations able to look at AI and embrace advanced analytics, which is a challenge as well? There are clearly those low-hanging fruits which advanced analytics can help solve with unstructured data that many organizations have not tapped into.

A simple example is text analytics. There's so much value you can get by looking at your customer conversations - the chat histories and so on - and customer feedback. But there is one class of organizations still looking only at numerical data and ignoring all of that easily accessible, low-hanging fruit. So we need to work on both ends.
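[To make the text-analytics example concrete, here is a minimal sketch. The feedback snippets and theme keywords are purely hypothetical; the point is that even standard-library Python can surface recurring themes in customer feedback without any machine learning.]

```python
from collections import Counter
import re

# Hypothetical customer feedback snippets (illustrative only)
feedback = [
    "The checkout page keeps crashing on my phone",
    "Great support, but delivery was two weeks late",
    "Delivery late again; support never replied",
]

# A tiny, hand-picked theme lexicon - a real project would refine this
themes = {
    "delivery": {"delivery", "late", "shipping"},
    "support": {"support", "replied", "agent"},
    "technical": {"crashing", "error", "bug"},
}

# Count how many feedback items touch each theme
counts = Counter()
for text in feedback:
    words = set(re.findall(r"[a-z]+", text.lower()))
    for theme, keywords in themes.items():
        if words & keywords:
            counts[theme] += 1

print(counts.most_common())
```

[A first pass like this is exactly the kind of low-hanging fruit Ganes describes: no model training, just reading the text data an organization already has.]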

KIMBERLY NEVALA: And how much in your estimation, in the work you've done both broadly and with organizations, of what people are calling "AI" today is really AI? And does having precise definitions of this matter when we're talking about when and where to deploy these technologies?

GANES KESARI: So the moment people use "AI," I look at the audience. And then I kind of tune into their definition of "AI." I don't ask anymore. In the industry, if we look at articles and general conversations, when people say "AI" today, the filter I apply is analytics - it could even be simple analysis in Excel. They mean that complete umbrella. So that's what they really mean by "AI."

Whereas someone who is really in the analytic space - for example, if it's a chief analytics officer. If they are talking about "AI," then perhaps they really mean machine learning, or deep learning, or the advanced analytics in the truest sense. So that's what they mean by "AI."

So we need to tune into the audience and what they really mean by it. Otherwise, it's used very often today as an umbrella term.

KIMBERLY NEVALA: OK. And in your experience also then, how accurate - whether we're talking about public or private organizations - how accurate are leaders' perceptions of AI? However they might use the term. And are their expectations for the level of effort and the amount of return they can get and how fast that can happen realistic?

GANES KESARI: I see leaders at two extremes here.


GANES KESARI: Let's talk about the data and expectations on what data is needed. There are leaders who assume that AI can do everything and it can even create its own data. So [CHUCKLING] that's one side: you don't need data. AI can take care of everything.

KIMBERLY NEVALA: Synthetic everything.

GANES KESARI: [LAUGHS] No, not even talking about synthetic. Just like you can magically create data. AI is, like, omniscient. It can do anything. So that blind faith. So that's one thing, one extreme.

The other extreme is leaders who understand that a good volume of data is needed to train algorithms. And they wait too long. They wait for too much data to be collected, or maybe for a data warehouse or a full data lake to be created. They don't start analytics or advanced analytics until then.

The expectations, they are at either extreme, so I think it needs education.

Same thing if I take one more example of extremes: ROI. There are leaders who expect AI to deliver, like, tomorrow. The moment you set it up, it should transform decisions and keep pace with the audience. That's a little unrealistic in terms of ROI in a very, very short term.
At the same time, the other extreme is leaders who don't demand enough. Who don't set targets for, and demand results from, the teams they've invested a few million dollars in. There has to be a measurement of the value generated by the initiative. There have to be some tracking and targets set. If you don't have that, that, again, could lead to disillusionment.

So I see that - whether it is with data, ROI, or how you execute AI engagements - people are at the extremes. And bringing them to the center calls for education. Otherwise, they will burn their fingers. They'll do multiple projects, see failures, and only then come around - which can be avoided with education.

KIMBERLY NEVALA: And when you say "education," what are we really talking about? Are we talking about basic numerical literacy? Statistical literacy? I know that that's not the full gamut. So when you say "education," what does that look like? And for organizations versus individuals, what are the different aspects we need to be addressing?

GANES KESARI: Yeah. I'm talking about data literacy or data fluency.

The ability to understand how data can help an organization. The ability to communicate with data, just like language literacy - to read it and communicate with it freely across different stakeholders within an organization. That level of understanding could be reading tables or basic charts, being able to interpret simple analytics insights, and being able to talk about their business implications. So that is what I mean.

And from a leadership standpoint - the people who are making the decisions on funding projects - it's understanding what the potential is. How do you go about picking the areas where you implement AI or, for that matter, any data analytics technology? So education on those aspects: the potential, possibilities, and limitations of what we can get from AI. What is the time frame? What kind of skills and team do we need to deliver that? And what interventions are needed from an end-user standpoint?

So, understanding all of this. It doesn't call for knowing models. It doesn't call for learning about design. But the application of technology in an enterprise context - that's what we mean by data literacy: the ability to communicate with data.

KIMBERLY NEVALA: Mm-hmm. And what skills or roles - put that in whatever frame you'd like - are most often overlooked or under-emphasized in organizations that are looking to extend, expand, or even start their data science and AI programs?

GANES KESARI: I think the role of data translator is still under-emphasized.

Organizations and people have been talking about it for a while. But in my experience consulting, I see many organizations still hire only data scientists, data engineers, or information designers.
So these are the roles people typically hire for.

But the data translator, for those who are not familiar, is the role that helps translate the business problem into the right questions. And then translate those business questions into data questions, come up with an approach using data analytics to solve that, and then translate the solutions back into a format that users can understand and act upon. Which also involves change management, adoption, driving data literacy - which we just spoke about.

So the data translator does the translation back and forth, is a strong business advocate, and typically comes from the business or user side of things. This role I still see as under-emphasized. I think three or four years back, McKinsey published a report saying the data translator was going to be the hottest role of the next decade. So people talked about it for a while. Some organizations have been hiring for it, but many still ignore the role.

KIMBERLY NEVALA: Is this an evolution of what we used to call the business analyst or a data analyst role? How do you differentiate those? Or do you?

GANES KESARI: Yeah. So it's an evolution of that. In a typical technology space, a business analyst would play a very similar role. Whereas, in the data and analytics space, this also calls for an understanding of additional disciplines: data analytics, information design, and a bit of change management. Because there is so much more resistance to this technology than to typical technology projects, there is a lot of change management involved. So all of these aspects get added to the data translator skill set, in addition to business analysis.

KIMBERLY NEVALA: Mm-hmm. Now, you mentioned there's a lot of resistance. And going back to this idea of narratives and public narratives that are being driven-- and we just spoke to Dr. Christina Colclough as well about this idea that analytics and AI are just going to make your world better, right?
You're not going to have to do all the work that you don't want to do, or the work that you - or perhaps someone else - may view as drudgery. And there is this bit of a narrative around: you should be embracing this. Because, yes, it might make some aspect of your job obsolete, but isn't that great, because you didn't like doing it anyway…right?

KIMBERLY NEVALA: Is that an aspect of it? Back in the day, you and I met years and years ago, I think, at a data conference. And we used to laugh about this idea where we were talking about data management. And the analysts creating the analytic models - and back then we weren't even necessarily talking about AI at all - would say, I really hate all the time I have to spend munging data. And we'd say, OK, we're going to set up this different role, a different process, these pipelines so you don't have to do that. And they would fight us tooth and nail, right, even though that is what they absolutely said they didn't want to do. But ultimately, that is what they were being rewarded and recognized for.


KIMBERLY NEVALA: So there was this weird dichotomy. Is that part of the issue with resistance today? And then where are you seeing or what is causing resistance, particularly with more advanced analytic solutions?

GANES KESARI: Yep. Resistance from two kinds of audiences. I think we are talking about the people who get involved in implementing these solutions, like the data engineers - the technology team's resistance to even embracing these. Earlier, we were talking only about resistance from the end users.


GANES KESARI: But today, we are clearly seeing that AI is eating into a lot of the activities of data scientists, data engineers, and a lot of technology roles. That's the reality. So we'll talk about resistance from both those audiences.
Firstly, from the end users, there is resistance for two main reasons. One, a lack of awareness as to what could happen. Would it completely automate away my job? For example, for a person sitting in the call center: is there some AI tool which is going to help them answer calls better and provide real-time analytics on what the customer is thinking about? If that audience doesn't understand the AI capability or what it is all about, there is resistance - a fear that it might automate away their job, or a fear of the unknown. So that's the first part.

And the second challenge I've seen in organizations is the need for reskilling. Yes, you get to keep your job, but you have to evolve and take on a few additional tasks. There's a change in the skill mix. People hate that change, too - the moving away from familiar, comfortable tasks.
That calls for change management.

The first one calls for education. It could be data literacy: telling them, this is what is changing, this is what AI means, and this is what analytics means. Whereas the second one is a change in skill mix, a change in responsibilities, which needs to come from the leadership - the organizational career paths, and the rationale for why we need it. The organization has to say, again, this is no longer a question of innovation but survival. For organizations to stay competitive, they need to embrace certain technologies, certain solutions. And for that, people have to upskill and change their job mix.
So those are the two challenges and likely solutions.

Whereas, when it comes to the technology team, it's a little similar. The second part, in terms of change and skills, is what you were hinting at. That, yes, we've always been doing it. We are really good at it. But if AI is coming and taking that part of it, then what do I learn? And should I do that? So there, again, I see a similar dynamic playing out, Kimberly.

KIMBERLY NEVALA: Mm-hmm. That issue of technologists resisting technology is not one that gets raised very often.


KIMBERLY NEVALA: So that's interesting. I think that's important and a conversation that needs to happen. For the end users, to be fair, a lot of times the way that these things are communicated is couched as: you need to adjust so that you can work around the machine. As opposed to: we are bringing a machine to you to facilitate your work. Are we doing enough to make sure that we are developing systems that work for the humans as opposed to adjusting human work so that it's amenable to the machine?

GANES KESARI: I think it's a little bit of both. There are cases where you completely redesign. For example, Ant Financial versus a traditional bank. That's a Harvard case study, where they talked about how Ant Financial started late. They didn't have the challenge of legacy processes which had been running for decades. They were able to come up with a completely new process, which is most efficient for customer service in the financial space. Whereas organizations like, say, JPMorgan Chase, they have a legacy. They can't completely rewire their systems. So you'll have to look at the legacy of the organization and how much of it is possible. If possible, then redesigning is going to bring in a lot more efficiency, but it's going to be very disruptive. So that's one aspect.

And the other part of it is just in terms of upskilling the people. Given that this disruption is happening, or there is a change or displacement, how do you get them to take on this additional, incremental set of tasks? And it's fascinating that we didn't used to hear much about technology roles getting displaced. We are seeing that in the last couple of years. If I look at three or four years back, with technology teams the conversation was mostly: there's a lot of resistance from users, but this is so beneficial.


GANES KESARI: I've had conversations with data scientists about that: why don't they see the benefit? But today, when you have, for example, GitHub Copilot. Or you have many of these other design acceleration tools which use AI and with prompts, you can create some of those dashboard designs. So now the narrative is shifting inward. If this is so powerful then [LAUGHS] why can't we embrace it? And what does it mean for some of our technology roles? I think that conversation is shaping up. And we'll hear more of it.

KIMBERLY NEVALA: I wonder if that will result in greater empathy [LAUGHTER] broadly for users - for those impacted by these systems, or that these systems are being used for and/or against. But it's interesting that it had to come home to roost in our own area first.

GANES KESARI: [LAUGHS] That's true. Yeah, that's true. It's a cycle, right? It comes back always. So that we need to see.

And one thing I no longer agree with. I used to peddle the same message earlier. Which is, the drudgery is getting automated away so you can spend time on creative tasks. I don't agree with it anymore. At least looking at it now, I've changed my position.

Looking at the kind of tasks that AI is getting good at - a lot of these creative tasks it is really good at. I've been using some of these generative AI tools for the last couple of months for ideation, bouncing off ideas. If I'm running workshops, what should be the narrative? What should be the outline? I've been using it for brainstorming, which is completely unimaginable, right? I wouldn't have thought about it six months back. I used to do it with people earlier.

So I don't think just the drudgery is getting automated away. A lot of juicy creative work is also getting automated away. Which calls for thinking about what kinds of skills we are going to need - perhaps critical thinking skills, review, and still that empathy and human element. There are still skills, but I think there's a big change in the mix of skills we thought would be important. And as we see the AI capability shape up, we'll have to see what stays relevant for us.

KIMBERLY NEVALA: This may also come down to us thinking about what it means for humans to find work meaningful - that was a terrible alliteration there. In that there may be aspects of things that we could automate or augment that we fundamentally decide not to. Because, by virtue of doing so, we create organizations, or environments, or workplaces that are not engaging, are not fulfilling, do not allow us to think and flex our creative and mental or physical muscles in a way that actually drives human satisfaction.

GANES KESARI: I doubt whether organizational leaders would agree with that. Because if it comes down to the economic profit motive, they will probably create alternate avenues rather than messing with their bottom line. [LAUGHTER] So I'm not so sure about that.

KIMBERLY NEVALA: Oh, yes, no. That might have been a general musing in my rosy, rosy sky over here. [LAUGHTER]

GANES KESARI: No, we hope, yeah, we hope that there are enough activities. But again, [LAUGHS] I will have to say that we'll have to watch and see how things evolve.

But I still strongly believe that there will be new roles and new skill mixes which evolve and which we will find fulfilling. As we've always done over the last 80-90 years. That gives me hope that, even if some of these creative tasks which we find attractive are automated away, there are going to be other adjacent areas, new roles which we can't think of today, which will emerge. Just going back to the same example of the internet: the roles of web designers or SEO (search engine optimization) experts. Some of these are roles we couldn't have imagined 30 years back.

So on a similar note, there will be several other roles which call for working on top of the world that AI is creating, and which need very, very different skill sets.

KIMBERLY NEVALA: Mm-hmm. We had an opportunity to talk to Roger Spitz last year. And he talked about the future of decision making. Thinking differently and developing and honing skills for folks to think, make decisions, and act in the face of change and uncertainty.

You've also said that with analytics writ large, including AI, there may be a change. That there's an evolution to decision making in an organizational context. Can you talk a little bit about what you project happening and how organizations, leaders, and employees need to be thinking about that now?

GANES KESARI: Yeah. I think the discipline of decision intelligence and decision making is not getting enough attention today. But with all the increasing spend in not just AI but other technology - cloud and all of the other technology - there is going to be a huge emphasis on how it is helping organizations make better decisions and leading to top-line growth or bottom-line growth.

So the ROI is becoming all the more important. I hear this almost in all of my conversations with clients today. That, compared to three years back, today they are clearly demanding and they are talking about ROI, which is great news.

What that also relates to is: how are we transforming decision making? For doing that, we need not just analytics, but also disciplines which tackle change, right? And then there are completely new areas which we need to start paying attention to: behavioral science and psychology, which talk about how humans work within an organization and how groups of humans collaborate. So it's a combination of all these things, which is where I'm seeing many of these new skill sets blending into AI. Whether it is in terms of understanding consumer behavior, driving adoption within an organization, or quantifying the ROI by running experiments.

All of these are very different skill sets. Someone who comes from a traditional data and analytics background just won't be able to do these. So you need a combination of skills. That is what I think is becoming important and will be important in the future.

KIMBERLY NEVALA: It's interesting, too, because of this topic of decision intelligence. I just had the opportunity to run a really fun hybrid workshop-roundtable on decision intelligence. And one of the things I'm seeing from technology companies is this thought that decision intelligence is essentially, as I said, ‘AI plus BI’. So it's automating analytics or automating BI. And I think that's not the right frame.
I understand why automated analytics or automated BI is important and valuable. But when we talk "decision intelligence," I would much rather suggest that folks think about this as ‘insight plus behavior’.


KIMBERLY NEVALA: It's really thinking through what is it that we are trying to achieve, what's the outcome? Starting from there, what are the thresholds or the reasons why I would make a decision or take an action based on A and B? And deciding some of those things before we create a model. And then (thinking through) why will someone engage or not engage properly, if at all, with that system? And where will it go wrong, despite our best intentions? Is that a fai…

GANES KESARI: Yep. Absolutely, yep. So if we break decision intelligence down, there are three disciplines.

The first is cognitive science: understanding the target audience, understanding the decision makers and human behavior. That's the first part of it.

Second is where, once you have that understanding, data analytics comes in. You collect the data. You analyze. You identify insights.

And the third element is organizational intelligence, which tackles the change management part of it. How do you institutionalize a solution within an organization and scale it up.

So you need all these three - cognitive science, data analytics, and organizational intelligence - to come together for scaling decisions in an organization, which ties back to what you were mentioning, Kimberly.

KIMBERLY NEVALA: What is your counsel for organizations and leaders in particular who still feel this need for speed and say, wow, this feels like a process, right? And it is a process.


And it takes some time. What is your response to that? And why is it important to take that time?

GANES KESARI: So I've had this conversation with some leaders. I totally agree that, yeah, all of this feels like a lot of work. Yes, because it is. And why can't we spend more on data analytics and then--

For example, I'll give you this interesting story where I was talking to the leader of a manufacturing firm. And he said, we want to use AI for optimizing our yield. Then when I looked at the team and spoke to them to understand where they were - even before suggesting a solution - I understood that they were using a lot of pen and paper. They had spreadsheets at best. They did not have anything close to a data warehouse.
And in this scenario, if you go in with AI - or even some simpler form of advanced analytics, whether statistics or machine learning - you would end up with great insights on data which no one believes. [LAUGHS]

So in this conversation, I asked the leader, let’s say in your organization, tomorrow, we build this solution. And you come up with a fabulous insight which your team did not know. And this calls for a major change in your production process. Would you implement it? He thought about it for a while. And then he said, no, I would hesitate.

So then I probed further: why would you hesitate? And he said, yeah, maybe - I know that at times there's been a lot of back and forth on the numbers. So perhaps I would go back and ask the team to check the numbers to see if they're right. So I asked why again, probing further. And then finally he said, I don't trust the data. And I said, there you go. So you need data quality before you double down on analytics and AI.

If you don't focus on the foundation - the data engineering, the data quality - and then you step up on analytics, you end up with… well, I've published a two-by-two based on this conversation, with quality of data (low or high) on one axis and analytics (low or high) on the other. If you have high-quality analytics and low-quality data, you end up in a zone which I call the zone of disillusionment. You come up with something, but you don't make the decision. The technology team is frustrated that they poured their hearts into creating something no one believes, and no one uses the solution. And at the same time, the executives have spent money on it, and they're not getting the bang for the buck. So that is a zone you need to stay away from. Which explains why you need to take some of that time, build the foundation, and then scale it up as you go.

KIMBERLY NEVALA: Mm-hmm. Mm-hmm. And when we talk about communicating the value of taking that time in the process - or even just the value of analytics and AI - both to incent interest and to help organizations figure out when and where to invest and when not to, are there some key nuggets that you provide to folks to guide that conversation?

GANES KESARI: Yeah. I think picking the impactful initiatives. We've created a framework around that: how you can systematically identify impactful initiatives. And this can be done by every organization. You start with the business goals, the business priorities. Don't go with what's interesting, but with what's strategic and important enough for the business.

So in early conversations, when I'm running these workshops, I don't talk about data or technology. I don't introduce technology at all. I talk about the pain points of the users. Start with that. And identify what's in the organization's strategy this year, what outcomes they want to enable, and what the challenges - the business initiatives - are that will get them there.

And then, for those initiatives, you find out what technology intervention is needed. Whether you need data analytics, or some other technology intervention is better suited. If it is data analytics, then what level is needed? All of that comes after. So start with the business priorities and then quantify the business impact: what is the likely top-line or bottom-line impact?

And then the third and final element is feasibility. Do we have the technology? Do we have the people or the tools? Will we be able to manage the change? Some changes could be really big such that the feasibility is low.

So you have these three elements: first, alignment with strategy; number two, quantifying the business impact; and third, feasibility from a technology and change management perspective. This will help you place the initiatives and prioritize them. You can build a roadmap based on that.

KIMBERLY NEVALA: [LAUGHS] Technologists everywhere who come with this sort of tech determinism or tech-first approach are going to be shaking their fists. But it reinforces the point that even something as exciting as AI - define that how you will - is still a means to an end. And the end is business action, business outcome.

GANES KESARI: Absolutely, yeah. Yeah, very true.

KIMBERLY NEVALA: So you do a lot of work across industries and have the opportunity to not only see what is being sold and told in the public sphere but what is really happening on the ground in organizations where the work is happening, or not, as the case may be.

As you look forward over the next couple of years, what do you expect to see? And what advice would you give organizations and individuals to start to engage now?

GANES KESARI: Two points.

One, I would advocate strongly for a focus on enterprise value and ROI. We are not seeing it enough. And for doing that, you always start with the outcomes, pick the metrics, and then define how the initiative is likely to lead to them. Make that connection, that attribution, and then quantify it by tracking over time. So that is the most important recommendation.

And number two is, rethink the skill sets of your teams. Particularly with the technology advances in AI, what skill sets do you need? And it's not just the technology skill sets. We talked about behavioral science. And we talked about design for storytelling, which can be a very powerful way to translate those insights into actionable recommendations. Bring in storytellers, data storytellers.
So look at the mix of skills needed in your team - be conscious about that. You don't need just those three or four technical skill sets anymore. So those would be my recommendations, Kimberly.

KIMBERLY NEVALA: I just can't help it. I have to ask another question. You mentioned storytelling. And we started the conversation talking about narratives around AI. When we talk about telling data stories or storytelling, folks may in some cases think this is really about coercion, or changing someone's mind, or nudging them. But I don't think that is the intention. So what does good storytelling look like? And what should storytelling never be about?

GANES KESARI: So there is the propaganda type of storytelling. When you have data visualization or infographics, it all depends on the insight that you want to convey. And if that has an ulterior motive and you're just hiding all the other data, all the other supporting facts, and just driving users towards that, you can very well mislead with data and with stories. Ultimately, it depends on what is the intent and what data you're trying to present.

Data storytelling in its truest sense is about helping users access insights to make decisions and using those to solve business challenges. That's what data storytelling is about. So whether it is an AI model or an insight from a pivot table in Excel, the user really doesn't care. All they care about is that the insight is presented in a format they can act upon.

For that, you need to understand the business context. You need to come up with the right insight which answers the question and then present it visually with a narrative. A narrative is sequencing this as a plot to make sure that it is easy to relate to and it emotionally connects with the audience. So back to your question, you can mislead with stories, but it ultimately depends on the intent. And the responsibility lies with the creator.

KIMBERLY NEVALA: Like everything else: responsible storytelling in the service of responsible analytics and AI. I love it.


Thank you, Ganes. I really appreciate you coming today and sharing a very frank and grounded perspective on all of the fantastical stories that are swirling around things like AI, ChatGPT, et cetera, et cetera. And helping us understand what the really foundational nuts and bolts are to ensure that we are deploying these things in a way that's responsible, gainful, and drives value for everyone involved. Thanks again.

GANES KESARI: Yeah. Thanks for the interesting questions and wonderful chat. And thanks again for the invite, Kimberly.

KIMBERLY NEVALA: Excellent. If you’ve missed any of our stellar guests – or want to catch up on a new favorite such as Ganes - now is the time. We’ll be back soon with more ponderings on the nature of AI and its implications for all areas of our lives from education to healthcare and caregiving to how we socialize and work. Subscribe now so you don’t miss it!