AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.
This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.
If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.
Ranjit Pookkottil (00:00):
Many people are confusing access with adoption. They roll out these tools and it's assumed that transformation will automatically follow, without the training, without the workflow integration, and without tying them to real business problems. It becomes discretionary usage. It's not real operational change.
Jon Herstein (00:21):
This is the AI First Podcast hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence and putting AI at the core of enterprise transformation. Here's a question worth sitting with. Your company has probably launched an AI pilot, maybe more than one, but how many of those pilots have actually changed the way your people work? Six months later, a year later, not the demo, not the announcement, the actual behavior. That gap between AI that gets deployed and AI that sticks is what we're digging into today. I'm Jon Herstein, Chief Customer Officer at Box, and welcome to the AI First podcast. My guests today are two people who are living this challenge from the inside at BTS, a global firm that helps the world's leading organizations build the capabilities they need to execute strategy.
(01:21):
Peter Mulford is the EVP and Chief Artificial Intelligence Officer at BTS. Peter spent decades building transformation capability inside one of the most people-focused firms in the business, and he has a clear-eyed view on what it actually takes to make AI adoption real. Ranjit Pookkottil is BTS's global head of IT and compliance. He's the person who has to make the bold AI vision work in practice across global infrastructure, regulatory requirements, and the very real constraints that product teams often underestimate. Together, they bring us a rare combination: the cultural and strategic lens, and the operational and compliance lens, in the same conversation. So let's get right into it. I'm very happy to welcome Peter and Ranjit. How are you guys?
Ranjit Pookkottil (02:11):
Very good. Nice to meet you.
Peter Mulford (02:12):
Doing well. Yeah, likewise. Great to be here.
Jon Herstein (02:15):
So what would you say excites you most about enterprise AI right now? And let's go beyond the hype and the headlines, and what are you actually seeing in the trenches that gets you really excited from the perspective of trust and stakeholder inclusion?
Peter Mulford (02:27):
When you frame it that way, honestly, I think the thing that excites me most, moving beyond the capabilities of the technology itself, which are, of course, amazing, is
(02:40):
What happens to people's sense of identity when the technology works. What I mean by that is, unlike any other technology, with AI something amazing happens in the space where people interact with and react to using these things. And the best way to describe it is when a team actually gets it, when they genuinely reinvent a workflow, not through a training program, but actually rebuild the way they work from the ground up. Literally, something shifts in the way they see themselves professionally. It's not simply that they high five and say, "Wow, that was really cool." They move to this place where they feel, for lack of a better word, augmented, in a way that makes the stuff they used to know more valuable, not less. And it's the kind of thing that we really try to get participants and executives to feel down to the balls of their feet, because once you get there, you become a lot less concerned about, for example, being replaced, which is something that people worry about now.
(03:50):
So that I think is the thing that excites me most. You don't get it with everyone all of the time, unfortunately, but when you do, it's surprising and exciting, frankly.
Jon Herstein (04:01):
Is there a use case that you're particularly proud of, something you all have implemented already where you've made a change operationally and it actually changed behavior that you could see a measurable outcome? And what did that look like?
Ranjit Pookkottil (04:15):
So the sales enablement team, for example, embedded AI directly into their account workflow. Previously, account managers would be looking through their Salesforce, checking LinkedIn for job changes, checking if any of their contacts had moved, drafting the outreach emails. It's time-consuming. It takes a good amount of their time,
(04:36):
Which they wish they could use elsewhere. So we built an AI-driven workflow that cross-references the Salesforce data with LinkedIn signals. It pre-drafts these emails and drops them into the account manager's mailbox. All they need to do is go in there, refine it, personalize it, and send it out. The impact was not just efficiency; it changed the whole behavior. The managers could reach significantly more contacts in a day, with higher consistency and better timing. They know about changes, people moving, as soon as they happen. An added benefit is it really improved CRM hygiene. The contact data is much fresher because the updates are driven by the workflow, not by people's memory of, "Hey, this person moved a month back or whatever." It basically shifted the effort from searching to really selling.
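The shape of the workflow Ranjit describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not BTS's actual implementation: the function names, data shapes, and email wording are all invented for the example, and a real system would pull from the Salesforce and LinkedIn APIs rather than in-memory dictionaries.

```python
# Hypothetical sketch: cross-reference CRM contacts against job-change
# signals, then pre-draft outreach for the account manager to refine.

def find_job_changes(crm_contacts, signal_feed):
    """Return contacts whose observed company differs from the CRM record."""
    changes = []
    for contact in crm_contacts:
        observed = signal_feed.get(contact["email"])
        if observed and observed != contact["company"]:
            changes.append({**contact, "new_company": observed})
    return changes

def draft_outreach(change):
    """Pre-draft a short note; the account manager personalizes before sending."""
    return (
        f"Hi {change['name']}, congratulations on the move to "
        f"{change['new_company']}! Would love to reconnect."
    )

# Toy stand-ins for the CRM export and the signal feed.
crm = [
    {"name": "Dana", "email": "dana@example.com", "company": "Acme"},
    {"name": "Lee", "email": "lee@example.com", "company": "Globex"},
]
signals = {"dana@example.com": "Initech", "lee@example.com": "Globex"}

for change in find_job_changes(crm, signals):
    print(draft_outreach(change))
```

The key design point from the conversation survives even in this toy version: the workflow, not human memory, drives the contact updates, which is why the CRM data stays fresh as a side effect.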
Jon Herstein (05:34):
What have you guys learned about the sort of minimum viable combination, if you think about AI plus the methodology that turns AI into a really scalable enterprise capability? Frameworks, decision points, feedback loops: what are the ingredients that need to come together for it to become really valuable? And I guess what I'm trying to get to is the next click down, which is, what are the key elements that make it real? Is it the methodology? Is it frameworks that you've built, the feedback loops, quality assurance? What are the things that you've found really matter for going from something that's a nice idea and a cool demo to "we've actually changed the way the business works"?
Peter Mulford (06:11):
It's probably a lot less surprising for your listeners than you might think. I mean, to some extent, it's good old-fashioned change management of any variety. The first step is you want to be clear on what's the problem you're solving, or what's the problem area you're aiming at. And the role of a leader in this space is to provide input clarity, or criteria clarity. It's: what's the domain of problems we're trying to solve, and then what's the minimum threshold for it to be worthwhile? So it's the difference between saying, "Hey, I want everybody to get better, more personally productive with the way they do emails," and something at a higher level of analysis, which is, "We really need to move the needle on this margin in this part of the business," or, "We really need to shorten the cycle time it takes us to go from X to Y in this space." So part of it is clarity.
(07:11):
I mean, real brutal clarity on what's the problem we're trying to solve, for which we think using AI might be the solution. Then the second piece, the second of three, I would say: if clarity is the first, humility is the second. And this is hard, I find, for a lot of leaders: just being okay with the idea that they don't have the answer, and that their team is likely to be the one to come up with the answer. It's not going to come from them. They're not going to save the day. The answer will come from someplace else, and they need to be okay with that. And you might think, well, isn't everybody, all the time? Actually, a lot of people aren't, much of the time. So you've got clarity, humility, and then the last thing is rapid learning loops.
(07:59):
Literally, you have to create the space in which people can quickly test and learn, just like you always do with innovation everywhere. It's just that with AI, it takes on a whole different feel in terms of the speed, the variability, and of course the surprising risks that can come along for the ride. That was a lot of me talking, but if you remember just three things, the answer to your question really is a combination of clarity, humility, and rapid learning loops. And when you get those right, amazing things start to happen in terms of what you can do with AI. The rapid learning loops show up in teams' ability to keep aiming higher, which is to say, if you start with a relatively straightforward problem set and set people at it, and they realize, "Oh, this is amazing," then you can reach higher and higher and higher, in that kind of pattern.
Jon Herstein (08:56):
And I love the simplicity of that framework. I want to go back to clarity just for a second, because what would be helpful to understand is this: on clarity, you said, be clear about the problem you're trying to solve. How important is it that the outcome is measurable, and that you know what it is upfront, versus "we want to improve productivity" or "we want higher employee satisfaction" or whatever it is? So when you talk about clarity, is it just clarity about the problem, or exactly how you'll measure it and what outcome you're looking for?
Peter Mulford (09:26):
That's a great question, Jon. And the answer is, it depends on where you are in your AI maturity. What I mean by this is, if you're just starting out, if you've just put Box AI or Copilot or Gemini or Anthropic, whatever, in everybody's hands, then really the measure that matters is adoption. And I know this will drive some CFOs crazy, because they want to get to ROI as quickly as possible, but it's this notion, you've probably heard this old chestnut, of a thousand flowers blooming. The measure that matters is adoption. And you just want to watch this closely, because I can tell you some amazing stories about things people have done that have accidentally caused adoption to go down. But once you get past that, if you consider that stage one, where the adoption measures are up, then you're exactly right to notice you want to get to, "Okay, now what?"
(10:24):
Okay, everybody's using it, who cares? So the next set of measures gets to good old-fashioned financial measures. It could be things like, "Okay, everybody, now I want to find a way to reduce the cycle time of X, or I want to reduce the time it takes to do Y, or I want to increase the number of new ideas that go into a pipeline by Z." It doesn't matter what it is, but here's where you start creating clarity around the target. And of course, it's important to then set people free to go after it. And oftentimes the solution will come from the front line, because it's going to come from the people who are closest to the problem, but you do generally need to provide people with a sense of where we all should be aiming, in terms of how the thing you're doing is going to help us with our where-to-play and how-to-compete choices, and ultimately our financial measures.
Jon Herstein (11:23):
I have to admit, you've piqued my curiosity with your point about things that unintentionally drive down adoption. So maybe we can circle back to that one, but I'm very curious, particularly from the standpoint of, are there lessons that others can take away from that to avoid that problem?
Ranjit Pookkottil (11:38):
You start with the adoption, right? You give people the easy wins, which are more personal to them: "What do I do on a day-to-day basis that can be improved by an hour or 30 minutes of saving, or whatever?" That's the adoption cycle. But really, in my role, I'm looking at scaling it. And so for me, the interest is more in how to operationalize it, I would say. And for that, you start the other way around, not with the tool. You start by asking the teams, what is the friction in the day-to-day work? And you work on designing the end-to-end workflows using AI in the systems they already use. Where are you working today? Are you working in Box? Are you working in Salesforce? Are you working in other systems? Instead of giving them ChatGPT in a browser, you want to operationalize it in the systems you use every day.
(12:37):
So AI should live where the work happens. And then we define the measurable goals upfront. What's the goal here for each one of these tasks? Are you reducing cycle time, improving quality, maybe increasing outreach like the sales enablement example or improving data hygiene from the same example? If it's not embedded in your workflow and it's not tied to a metric, it's not truly operationalized, I would say.
Jon Herstein (13:06):
Peter, I want to ask you, you've described AI augmentation as really requiring a combination of things, including application development, infrastructure. And obviously we talked earlier about measurable outcomes, not just adoption, and also the idea of thinking about risk and compliance and the people side, which is critical to all of this. So I want to ask you, if you were advising another CIO who's launching a number of AI programs this quarter, what would your implementation blueprint sort of advice look like for them in practice? Things like who owns it? Who's responsible? How would you sequence the work? It's a big question. And then really importantly, what gets measured and how soon? So just advice for others on how to approach something like this.
Peter Mulford (13:48):
Well, what's interesting is, as I was watching and listening to you and Ranjit go back and forth, it made me think of something that might be a nice on-ramp to your question. And it's this, and I'm sure your CIO listeners know it: large organizations undergoing AI transformation, or frankly any kind of digital transformation, but especially AI transformation, are typically having two kinds of conversations simultaneously. One conversation is about technology deployment, and this might be your CIO listeners: which tools, which platforms, which vendors, governance, tech stacks, whatever. But then there's another conversation that should be happening, and probably is, but often in a different room, and that is around people capability. Who needs to change? How fast? What does good look like? And so, by way of answering your question, I would say the first thing is that the people responsible for both of these conversations need to notice that the other one is happening, or needs to happen.
(15:02):
And the second thing is they should be happening together; because the conversations need to happen simultaneously, they ought to happen in the same room. And when you do that, you'll notice a couple of things. By way of a blueprint, if I'm going to express the blueprint as a series of questions that you need to ask and answer together, the first one will be, "Okay, how are we going to communicate this from the top?" Everybody listening to this knows that transformation like this requires buy-in from the top. So what does that look like, and how is it expressed? Check. Step number two is: who, and how, are we going to manage the transition through the AI maturity that I described before, after we first figure out where we are? So broadly speaking, if we're at the early stages and the KPI is adoption, okay, who owns that KPI and how do we measure it?
(16:03):
If, on the other hand, you're further along and you're saying, "Well, now we need to reach higher," and there's a sequence of KPIs: who owns them, and how do we map and measure them? That's another set of conversations. And then finally, from there, you just want to decide: how do we figure out that we've achieved a KPI and should reach higher? Or how do we figure out that we reached for a KPI and, in that learning loop, something failed, and then redirect or kill it and move on? Classic innovation. All of that is to say, it's standard change management and innovation management, if you think about it. The thing that makes it a little different is that two conversations, between the CIO and, say, the head of capability, that usually happen separately have to happen at the same time. And you have to notice how the geometry of the conversation changes as AI maturity changes.
(17:01):
You can't just take a standard change management model and scale it forever. You actually have to keep switching it up as the capability and maturity switches. But
Jon Herstein (17:11):
You're saying, I think at the core of it, a lot of this is just change management and the way we've traditionally known it, which is you've got to organize around getting people to do something different tomorrow than they do today. And a lot of that's driven by bringing the stakeholders together, making sure that there's trust there and they sort of understand what the outcome is we're looking for, but doing it in a collaborative way as opposed to two different groups going off and doing it.
Peter Mulford (17:35):
It's a great playback, with two twists I would add. So the first twist: you're quite right to notice that the conversations need to happen at the same time. And this is to say, simplistically, AI is a team sport, not a tech sport. So if you're a CIO or a CTO who has been told this is on you, or imagines that this is on you, you want to switch that up. It really does require ... It's a technology that requires a diverse set of inputs in order to scale, versus other tech stacks in the past where, quite frankly, you could do it with the IT team and be fine. That's not true anymore. And the second thing I'd say that is a little bit different is you have to notice the speed with which capability decays. Traditional change management, from Kotter on down, assumes a window of time where, if you've got the change in place, it could last reliably for two years, five years, 10 years, whatever it is.
(18:41):
That's no longer the case. So the problem here is, you're not kidding when you say change has changed; it's these constant cycles. Right at the moment you think you've got the organization running in the right direction, you notice it's not the right direction anymore. So that is genuinely different. And I know, at the risk of sounding like a fabulist, this is the sort of thing you would expect a consultant to say, but in reality there's a math to it that you can measure and see unfolding all the time.
Jon Herstein (19:10):
It's hard to have an AI conversation without talking about governance. So I want to ask a little bit about governance, and maybe, Ranjit, I'll start with you. I'd start by making the statement that governance isn't just a set of policies; it's actually a whole living system, particularly, again, with the pace of change that we're talking about. So when you think about this area, I'm curious what some of the compliance or risk constraints are that product teams are underestimating in rolling out AI, and how you deal with mitigating that. What I mean is, it's your responsibility to create guardrails, but to do that without killing speed of innovation, execution, time to market, and all those things. So I would imagine a lot of this falls on you to figure out: how do I balance these things?
(19:57):
So what have you learned? What advice can you give folks? What's working, what's not working?
Ranjit Pookkottil (20:01):
Balance governance with speed, I would say. That's one thing: not to just be saying no. I think really that's what it is. We don't start with heavy governance. We start with a lightweight minimum bar. Whatever experiment you're doing, whatever application you're bringing in, there should be a few non-negotiables. I would say data sensitivity, cross-border exposure, the security posture of these companies; those are going to be non-negotiable. Like, for example ...
Jon Herstein (20:41):
Training models, I assume also would be one. Yeah.
Ranjit Pookkottil (20:44):
I mean, those are the things we ask for right away. If it clears that, then the teams move fast. Where it actually creates friction in this whole process is letting something spread virally and then pulling it back. I would say late governance has been more painful. It's better to say no early rather than letting it run for a long time, letting it spread throughout the organization, and then trying to pull it back. That's one thing.
Jon Herstein (21:19):
Well, and can I just say that if you catch it early enough, you might not be saying no, you might be more course correcting and saying, "You need to think about it this way, not that way."
Ranjit Pookkottil (21:27):
That's exactly it. You channel the energy into the safe lanes. Guardrails are what allow you to go faster, frankly speaking; that's my belief, because otherwise you waste a lot of time experimenting and then pulling it back. So that's the part about speed and how we can control it right from the beginning. About underestimating risk, the question you asked about the risks that are underestimated by a lot of companies: this is something I constantly fear with a lot of new systems, and that is data boundaries, especially in a company like BTS in professional services, right? We work across multiple industries, and we also sometimes work with direct competitors. And if AI systems aren't designed with strict data separation and access controls, I think there's a risk of cross-client contamination, even unintentionally. Speed without data discipline will be a liability.
Jon Herstein (22:33):
We've covered value, and towards the end of the conversation I always like to talk about three things that I care about a lot in customer success: value, which we talked about a lot, culture and change, and then experience. On the culture and change side, you teased us a little earlier, Peter, with this idea that there are some things you can do that actually negatively affect adoption. So maybe we can touch on that briefly, because I think we've talked about some of the positive things, but what should people avoid doing that may actually be hurting them from an adoption perspective? Yeah,
Peter Mulford (23:03):
There's a big one. We had a conversation not that long ago with a client in Singapore, one of the big companies out there that's doing a lot of great stuff with AI. And this chief executive's observation was that getting AI right is kind of like jazz. Jazz music. And this idea really captured the imagination of our own CEO, Jessica, and has become central to how we both teach clients to work with AI and do it ourselves internally. What I mean by this is that jazz has a couple of really interesting features that are actually repeatable at the level of a company. There's a lot of improvisation.
(23:43):
Groups of people working together all come at it with different skills, in this case different instruments, but they're quickly able to collaborate and pivot and move and improvise. And the funny thing about it is, when you do it right, a lot of joy comes along for the ride. It sounds the way it sounds, but actually joy is a key ingredient and a key byproduct of what happens when you get this right, which goes back to my first observation about how it changes people's identity. Now, to take this point about jazz, make it more specific, perhaps operationalize it, and also point out the real answer to your question: what do people do that kills all of this? You can think about it this way. In general, there are two types of leadership involvement in a transformation like this.
(24:39):
The first type is what I would call operational command and control. This is a leader who dictates who gets to experiment, what tools they use, what approaches. And there could be lots of reasons for that. They might imagine themselves to be an expert at AI; maybe they have "chief" in their title or "head of" in their title, and they're well-intended. But that kind of top-down control over experimentation kills the jazz. And by that, I mean it kills authorship, it kills ownership, and it stifles collaboration and the kind of bottom-up energy you need, the energy that produces real breakthroughs. Compare that to what we talked about earlier, the second type of top-down involvement, which is criteria clarity. If we continue with the jazz analogy, it is the case that someone has to play the first beat. Somebody's got to get the band together.
(25:38):
Somebody's got to say, "Okay, now we're going to do this set." What that would mean in a business context is you need to be clear on the problem area we're gesturing towards, what the threshold performance needs to look like, how many players there are. But that said, that was a lot of me talking. If you remember one thing, Jon, I would say: if, by way of analogy, jazz is the kind of environment you're trying to get to, then it becomes clear what leaders, often well-intended leaders, can do to kill it, and it's crushing it with too much top-down prescription.
Jon Herstein (26:13):
Yeah. Ranjit, I actually want to end with you. My last question for you is a little bit more on the experience side of things and the experiences we're trying to create, but specifically when you think about the various AI initiatives that may be kicking off, whether they're sort of bottoms up or tops down, how do you decide whether these initiatives or a specific initiative is worthy of scaling and putting into production and really going big with versus, "Hey, that was interesting. We learned a lot, but we're shutting it down."
Ranjit Pookkottil (26:42):
I think the scaling decision comes down to, first, integration with all our systems; second, measurable impact; and third, I would say, governance maturity. Those would be the three criteria I would think about. If we can't integrate cleanly with our systems, and like I said before, it needs to happen where the work happens, it won't sustain usage. That's one piece. And if we don't see measurable improvement in workflow metrics, it'll stay a pilot; it's just not going to scale beyond a pilot. And the governance piece, that's really the biggest thing, because we need to be working with our clients on this, and we need to keep that data safe. If it cannot operate within our governance boundaries, we just need to shut it down. We can scale systems, but we cannot really scale experiments.
Peter Mulford (27:40):
Ranjit, feel free to push back on me if you think this is wrong. The way you worded the question, Jon, it sounded like it's binary. You seem to say, when do we scale and when do we quit? I would simply add that for many use cases, not all of them, but for many, there's a third option, which is a redirect. In fact, the language we use externally is ARC: you have advance, redirect, or cease. And no magic there; BTS didn't invent that. It's been around since the days of Alex Osborn. But I think it's important, because it could be that you're doing something that isn't working quite the way you expected, and because of that, you might conclude, "Well, let me just kill this and move on," when in fact it might be gesturing at a new and completely different use case to go after.
(28:30):
And the problem is, if people think about this as scale-or-kill, they won't even be looking, and they can miss the magic in the detritus, if you will. So again, that's not original thinking, but it's important in this context, because I think a lot of the magic gets lost when things aren't going exactly the way people expected.
Jon Herstein (28:54):
I think it's a great reframing of the question, because you're right, it's not as simple and as binary as yes or no. Peter, Ranjit, thank you so much. This has been very enlightening, very candid, and I appreciate that. We focused on some pretty challenging things: failure modes, governance gaps, things that leaders should be thinking about differently than maybe they are right now. And that's precisely why we do this. So I really appreciate you sharing. And for those out there, if you're listening and you're in the middle of an AI initiative right now, whether you're trying to get it off the ground, figure out whether it's worth scaling, or decide whether you should pivot, as we've talked about, hopefully this conversation gave you something concrete to think about, take back to your organization, and learn from. And finally, the question I'd leave you all with: the thing that you're building, is it theater, or is it actual transformation?
(29:43):
And the answer is going to be in the outcomes and the changes in user behavior. So really focus on those two things, tweak, and refine. And if you found this episode valuable, please follow our show wherever you listen; it's really the best way to make sure you don't miss what's coming next. And if something in today's conversation resonated with you, please share it with someone, a colleague inside or outside your company who's wrestling with some of the same questions. These are the kinds of conversations that help move the industry forward, and really allow our listeners and our customers to take all the innovation happening out there in the world of AI and make it practical and applicable to their businesses. Well, thanks again, Peter and Ranjit, for all of your insights. I really appreciate it. And thank you all for listening. We'll see you next time.
(30:28):
Thanks for tuning into the AI First Podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI first era.