AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.
This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.
If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.
Beena Ammanath (00:00):
The reality is AI is not yet fully reliable in every possible use case, and rightfully so, because you release a model, you've not tested it in every possible scenario. It's just not mature enough to have been tested in every case. So I think that's where it still needs to be human-led and AI powered. You tap into the power of AI.
Jon Herstein (00:25):
This is the AI First Podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence and putting AI at the core of enterprise transformation. Hello, everyone, and welcome to the AI First Podcast, where we explore how enterprise leaders are applying artificial intelligence in real, meaningful ways, not just as technology, but as a catalyst for change. I'm Jon Herstein, Chief Customer Officer at Box, and today's conversation is about a shift that's already underway: from AI systems that assist us to AI agents that act on our behalf. As enterprises move toward agentic workflows, the questions aren't just what's possible, but also what's responsible. How do organizations build systems that can operate autonomously while still being trustworthy, explainable, secure, and aligned with our enterprise values? To explore that, I'm joined by Beena Ammanath, executive director of the Deloitte AI Institute and author of Trustworthy AI and Zero Latency Leadership.
(01:35):
In this episode, we'll discuss what it takes to build trustworthy AI agents, how agentic systems are reshaping work, why trust enables speed rather than slowing it down, and what leaders must do now to prepare for the next era of enterprise autonomy. Welcome, Beena.
Beena Ammanath (01:51):
Thank you so much for having me, Jon. Excited for our conversation. Looking forward to it.
Jon Herstein (01:55):
I am as well. And we're going to go very deep on a couple of key topics around AI, particularly applied AI, which I think is really the sweet spot for our audience. But let me start with your background. Tell us a bit about you, what the institute is, and then we'll talk about your books a little bit.

Beena Ammanath (02:12):
I started out my journey as a computer programmer. I studied computer science and have worked in a variety of industries. Currently, I work at Deloitte, but prior to this, I was leading data science at GE. I was the CTO for AI at HPE, and I've worked at Thomson Reuters and Bank of America, really large enterprises. Even though I studied computer science, and I'm dating myself here, back in the early '90s when I was studying computer science, AI was one of the subjects.
(02:41):
It was so farfetched. We used to imagine personalized ads and ask, what if these things happen? And now it's becoming real. I've always focused on the applied side of technology. That's what interests me a lot. I feel like my entire career has been built on that: let me understand how data is used by a telecom company. Let me understand how data is used in a bank, and what are the nuances that come with it? What do you think about when you take a new technology and apply it in the real world? And that's exactly what I do at Deloitte. I run our Deloitte AI Institute, which is focused on applied AI research. So all the latest new models that come out of the tech industry, when you take them and apply them in a bank, what are the implications, whether it's to the workforce, or from a compliance and risk perspective, or cost, long-term maintenance, the kind of talent that's needed.
(03:37):
There's a number of things you need to think through when you apply technology in a bank. And it can be different when you apply it in a bank versus in a manufacturing environment versus in a pharmaceutical company.
(03:51):
So it's fascinating for me, and I feel like my entire background has been building towards that, because that was the missing gap when I was in the industry. I had to figure it out on my own. How do you bring these best practices together? We are living in this interesting time where there are rapid advances happening in technology, especially in AI, and there's no single playbook of, here's what you need to think about when you apply it in a pharmaceutical company. So we are trying to build those best practices and share them, so that every pharmaceutical company, every bank, can move faster.
Jon Herstein (04:27):
I love that idea, that you're going to be more pragmatic and focused on how you can apply it. And obviously in consulting, you've got to think about the value you add to your clients, and it's got to be more than just what the underlying technology does. It's, how do you actually get value from this? It sounds like that's what you're focused on.
Beena Ammanath (04:41):
Yeah. Yeah, absolutely. And then obviously we all care about ROI. So how do you get value from it, but what's that cost and value balance?
Jon Herstein (04:49):
Can we talk briefly about your role as author? So you've published two books. Roughly, what are you talking about in those books?
Beena Ammanath (04:55):
Yeah. So the first book is Trustworthy AI, and it's really focused on my experiences in a variety of industries, on what trust means for AI in different industries. There are completely different parameters you need to think about. I remember in the early 2010s, when AI was just beginning to come in, we called it data science then, and there was a lot of conversation. Most of the headlines were around bias in AI. And here I was, looking at predicting jet engine failure or predicting how much power a wind turbine would generate. I'm like, no, bias really doesn't matter in this scenario. What really matters is the reliability of the algorithm. I really care about how reliable this is, how accurate this is, to be able to use it.
Jon Herstein (05:43):
And presumably repeatable as well.
Beena Ammanath (05:44):
Repeatable. So there's different parameters that you need to think about depending on the use case. I think we tend to think about trust as this broad umbrella and a one size fits all, and it's really not.
Jon Herstein (05:55):
Trust in a medical sense would be a very different thing.

Beena Ammanath (05:57):
Completely different. The need for transparency: you cannot have black box algorithms in a medical environment, but you can if it's your Maps app recommending a route. So it's really bringing in that different perspective on how to think about trust from an enterprise lens, and how do you actually solve for it. So that's the book on trustworthy AI. It came out in 2022 and it did okay. Then ChatGPT happened later that year, and from the next year until even now, it's constantly getting sold out on Amazon. So it didn't become a bestseller in the first year. It became a bestseller 18 months later.
Jon Herstein (06:36):
So you were a little early.

Beena Ammanath (06:37):
But it worked out okay. And then the second book is Zero Latency Leadership. Again, anchored on technology, but I do think we've entered an era where, no matter what kind of business you're running, if you are a leader, you need to know about technology. You need to understand the basics of AI, what AI tools are out there, how to bring them to your business, but also how to manage talent for AI and the governance and the risks that come with it. So Zero Latency Leadership is really talking about the qualities you need to succeed and thrive as a leader in this new era of technology we've entered.

Jon Herstein (07:17):
So not exactly an AI book, but it's very relevant now.
Beena Ammanath (07:21):
It is very relevant now, but it also serves as a primer on what I think are the most important technologies for the next 10 years of our lives, whether it's AI, quantum, or space. So it talks about the different technologies that leaders should be aware of.
Jon Herstein (07:35):
Okay. So go check those two books out. Now, you talked about talent, you talked about governance, and we definitely need to get into those topics. But before we do, I know that one of the things the institute does is produce an annual report on the state of AI in the enterprise.
Beena Ammanath (07:49):
Yes.
Jon Herstein (07:50):
So would love to hear more about that. How long has that been going on and is there a report out now?
Beena Ammanath (07:55):
Yes, yes. There is a report out now, and we'll hopefully dive a little deeper into it. We've actually been putting out that report for several years now. When GenAI came into the picture and we were getting headline after headline, we switched the report to a quarterly basis. We survey about 3,000 executives globally, from different industries, so it's not looking at it just from a technology perspective; it's looking at applied AI. We survey them to see where they are in their applied AI journey and what are some of the challenges they're facing. So really, to learn from each other: what are some of the advanced organizations doing to drive their AI adoption? We share that in the report so that everybody can learn from it. We did switch the report to a quarterly basis last year.
(08:52):
And here's the interesting thing we realized, Jon. You'd think that there's so much happening in AI, right? Because every day there's a new headline, a new video, something breaking the news about AI. But what we realized is that applied AI moves at a very different pace than the technology itself. Technology advancement has its own lane, and it's moving at a much faster pace. But taking that AI technology or tool and applying it in the real world moves at a different pace, which is much slower. And the interesting thing we found out is that it moves at the pace of change that an organization's culture allows.

Jon Herstein (09:36):
Right. And I definitely want to dive into that a little deeper with you, because that is the thing that, I wouldn't say impedes progress, but it sets the pace for progress. It's not just that there's a new model with new capabilities, because you can't just unleash that on the organization and say, go take advantage of all this, without teaching people how and explaining how it changes business process and all that, right?
Beena Ammanath (09:56):
Yeah. You just can't get that kind of adoption. You can have the coolest new tool, but if nobody uses it, it's a failed project at that point.
Jon Herstein (10:03):
Right. We should probably talk about failed projects too in here.
Beena Ammanath (10:06):
Yes, we absolutely should.
Jon Herstein (10:08):
Okay. So the report's out and it's now back to an annual cadence. Is that right?

Beena Ammanath (10:11):
It is back to an annual cadence. And for this report, we surveyed 3,200 executives from global companies, and there are some quite interesting findings, but I don't think any of them are shocking. It still really reflects the need for governance, the need for a focus on talent, the need for AI to still be human-led and AI-enabled. There are significant advances happening in the applied AI world. The interesting thing we found is that there are clear groups emerging: the ones who are adopting AI at a faster pace versus the ones who are adopting AI at a slower pace. And much of it comes down to how much they're investing in their workforce.
Jon Herstein (10:56):
Interesting. Okay. And do you think that's a deliberate decision they're making to say, we want to go faster and therefore we will invest? Or is it the other way around where they say, "We think this is important. We're investing in people and that is the thing that's unleashing our ability to go faster." Do you have a sense of which one's driving?
Beena Ammanath (11:11):
I think it's where you don't think of AI as just a technology project. It's where you think of the technology and then the corresponding change management and process changes that need to happen to bring the people along, where there is proactive thinking put into how you drive that culture change. How do you make sure that your people are on board and fully trained to take advantage of these tools?
Jon Herstein (11:35):
Right. So you mentioned human-led AI, and I would love to explore that a little, because it sounds like one of the top findings, from your perspective, coming out of the report. What does that mean, and what does it mean particularly for applied AI?
Beena Ammanath (11:47):
We've entered this era of agentic AI now. There's a lot of conversation about autonomous agents and agents making decisions completely on their own. It really depends on the industry that you're in and the use case, but you really still need humans very much in the loop. We are not at a point where AI is completely reliable. We still have AI that hallucinates and generates errors, so we still don't have AI as a technology that's robust enough. And I have very strong views, or hypotheses, on why that is; we should talk about that. But the reality is AI is not yet fully reliable in every possible use case, and rightfully so, because when you release a model, you've not tested it in every possible scenario. It's just not mature enough to have been tested in every case. So I think that's where it still needs to be human-led and AI-powered.
(12:45):
You tap into the power of AI. I was at a healthcare conference recently and I was listening to medical professionals, doctors, and CEOs of healthcare companies. And you realize that in a healthcare scenario, unless your agent is 100% reliable and transparent, it's impossible. A clinician will not be able to fully trust it, and neither should they, until they're fully confident that it is consistently providing reliable results.
(13:15):
And that just takes time. So it really depends on the industry that you're in and where and how the agent is deployed.
Jon Herstein (13:23):
But can I ask you this? Do you think that people expect their doctor to be 100% accurate all the time?
Beena Ammanath (13:28):
Not at all. I think it's-

Jon Herstein (13:30):
But there's a higher expectation for-
Beena Ammanath (13:31):
It's proven that you expect humans to make errors, but you expect your technology to be 100% accurate. You can still go back to your doctor, ask those questions, and get feedback, but you can't have two levels of errors. So I think it's more about how much you can trust it. What I love about what AI is doing in healthcare is the accessibility. We all go to doctors, and we've all seen the wait times, how much time it takes to get access to a specialist. So if AI can remove some of those wait times initially and help build up that trust, I think it'll definitely move faster. I don't think we are trying to build AI that operates at the human level of reliability. We are trying to build AI that operates beyond the human level of reliability, because we are expecting it to scale faster and do much more than a human ever would. There's this great saying: train your AI like you would train a child. And then there's this new phrase that has started coming out, which I am not at all a fan of, that AI is growing up. No. For us to hit AI's real potential, we have to stop thinking of AI as being able to do just what humans do, because that's not how it works. I have two college kids now, and they make their mistakes in college and they grow. If they make mistakes, all it impacts is them and their immediate circle. But with AI, if it makes a mistake, it could be impacting millions of people.
(15:08):
So the scale at which the errors happen is much higher. I do not like any of the hype that compares AI to humans, because we are aiming for something much bigger with AI, and we are using it in that way. When we downgrade it to saying that AI is growing up, or think of AI as a child, no: a child cannot cause the kind of damage or have the reach that AI has. So we have to take a step back and think of AI as something that, if it makes an error, could impact millions. And when you want to fix that error, you have to go back and fix millions of errors.
Jon Herstein (15:45):
So AI is sort of like a child with parents with incredibly high expectations.
Beena Ammanath (15:53):
I wouldn't go anywhere near that. Nothing to do with humans. Here's the other interesting thing, Jon. I was at CES last month, and the amount of robotics, or humanoids as we saw it-
Jon Herstein (16:05):
The physical AI floor.
Beena Ammanath (16:06):
... physical AI, fascinating. There was an entire floor of robotics, thousands and thousands of different kinds of robots doing different things, from folding laundry to cleaning the toilet to lifting heavy objects. There were so many of them. And I think I saw at least 15 booths about AI picking up something with a form like a human hand. Why do we start by thinking of replicating human capabilities? In the other case, we were talking about human brain abilities, but even in physical AI, why does your robot need to have only five fingers? If its job is to pick up something, maybe it needs to look more like an octopus, or maybe it just doesn't need fingers, right? We need something else.
Jon Herstein (16:56):
Right. Think about it in terms of the task you want it to perform. But I guess you can make the argument that if you want general-purpose robots, humans have evolved over many millions of years to be very good at general-purpose capabilities.
Beena Ammanath (17:10):
Yeah. And it's easier as humans to think of something that we already know, right?

Jon Herstein (17:16):
True. It's a model we understand.

Beena Ammanath (17:16):
Yes, it's a model we understand. So hands pick things up, so let's just design it exactly like a hand. And that's something that comes out in this report as well, tying it back: so far, especially in the applied AI space, we've been operating by thinking about use cases. Start with the use case, do a pilot, and move on. I think it's coming more and more to the forefront that to get the true power of AI, you have to think about transformation, especially as agents come into play. It's still early days with agents, but unless we think about the true power of agents and everything that can be done, it's not just about form-fitting agents into your existing processes; it's potentially re-imagining the entire process to see what makes sense for agents to do versus what humans should do, or what the different roles are.
(18:19):
I'm excited about this new chapter we are entering with AI, where I hear more and more about the transformation that AI can drive in the applied AI world, beyond just use cases and pilots. Because if you're just going to form-fit AI into the way we've always done expense management or travel booking, and put AI in piecemeal, bits here and there-
Jon Herstein (18:42):
You're sort of chipping away at the edges of it. Yeah.

Beena Ammanath (18:45):
Yes.
Jon Herstein (18:45):
And I do think, my sense is, a lot of the early initiatives, at least for the folks that we've talked to on this podcast, have started exactly that way: "Oh, I can get 10% more efficient in this process or this part of the process." But very few have gone very far in terms of taking a huge step back and saying, "How could I completely reimagine this thing?"
Beena Ammanath (19:04):
Yes.

Jon Herstein (19:05):
Are you starting to see that happen?
Beena Ammanath (19:06):
I absolutely am. Since last fall, I've been having more and more of those conversations, and the report really reflects that companies who are taking that step back to think about it as a transformation project, versus a use case project, are getting much farther. And I think it's also a good opportunity for us, no matter which industry you're in, to look at every aspect of the enterprise: the way you're organized, the kinds of roles that are there, the C-suite and below. As AI comes into the picture, especially as agents come in, we have to start thinking about, does this role even make sense? What's a better role? What kind of talent is needed? And not look at it just from a specific project, but take a step back, take your C-suite, and have that brainstorming session on what agents can do and how we should plan for it, from a roles perspective and a process perspective.
(20:01):
It's not just about continuing the way things are with minimal efficiencies. It's about shifting from efficiency to transformation.
Jon Herstein (20:09):
Well, I definitely want to talk about roles, but before we do, let me touch on transformation really quickly. In the conversations that you've had, are there industries or even companies, and if you don't want to name names that's fine, where you've seen they really get it, where it's happening much faster? And if so, why?
Beena Ammanath (20:26):
Yeah. I think it's across industries at this point, where the conversation of transformation is coming up more: that whole idea of re-imagining, completely re-imagining a business process. Thinking of it as day one: if you're setting up a business today, knowing what AI can do and where agents are headed, how would you set up the business from scratch? If you know you want to build this product, what are the functions you would need to build it, using AI extensively? So we call it the day-one approach. Take a step back. This is what our business is. If we had to do it from scratch, how would we do it, knowing what we know today?
Jon Herstein (21:15):
Is this going to favor startups and disruptor companies over established companies because-
Beena Ammanath (21:20):
On the contrary, I think the established companies have a good handle on their business. The legacy companies have actually been at it for decades. They know exactly what works and what doesn't work from a product perspective, for the end goal. So for them, I think it's easier, because they know what their product is; they're confident, they've been selling it for years. So I think for legacy companies, that's actually an advantage.
Jon Herstein (21:45):
It's a mindset.
Beena Ammanath (21:46):
It's a mindset. It's also trying to pivot a very large ship, and that takes a different kind of approach. The challenges they'll run into may not be so much the tech, but more the culture, the inherent culture of having done things a certain way for decades and now trying to pivot. So I think it's a conversation that you'll see more and more. And I do think it favors legacy companies more than startups.
Jon Herstein (22:14):
So speaking of pivots, let's pivot to organization, culture, and change, because there's a lot to unpack there. You talked about the need for established organizations to think differently about what departments they have, what roles they have, that sort of thing, which is a very difficult thing for particularly larger companies to do. So how are the best in class approaching that problem?
Beena Ammanath (22:36):
Yeah, there are a few different ways. The number one thing I've seen, and honestly some of these companies have existed for hundreds of years, is driving AI fluency, whether it's through AI literacy training, right from the board to every employee, getting everybody AI-literate and AI-savvy. It doesn't mean that a board member has to go and start writing algorithms or building agents, but they understand the capabilities, they understand enough when they read a news article to make sense of, does this apply to my organization? And-
Jon Herstein (23:11):
You want to do that all the way down to-
Beena Ammanath (23:13):
All the way down. And we can go deeper there, because that's one of my favorite topics, AI literacy. It's not just about understanding the value that AI brings, but also the risks that come with it, and what's important versus what's not, what to focus on. Especially if you're a board member, you do want to think about that proactively. So that's AI literacy. The second thing I've seen is setting up an AI sandbox internally, a safe space, so your employees are not going and trying out tools in the open and putting out confidential data. Provide them an internal sandbox where they can test out their ideas. And it shouldn't be restricted just to your IT department, because the ideas can come from any part of the organization. It could be coming from your marketing department, or from your engineering or product department. So put AI tools in the hands of every employee.
Jon Herstein (24:08):
We don't talk a lot about Box on this podcast, but that's one of the reasons we're so excited about the approach that we've taken: we're doing AI inside of your Box account. So it's already a trusted space. It's already where you have your content. You've already got security and governance and so forth. So we feel like it is a safe space to do exactly what you're describing.

Beena Ammanath (24:24):
Yeah. And I think it's expected that you would have AI in your Box platform, because that's the other thing. When I talk about risk, it's not just the risk of models that you might build in-house; it's also the vendors and the tools that you use, and what kind of AI they have embedded.
Jon Herstein (24:44):
Yes.
Beena Ammanath (24:45):
So: AI literacy, then having an AI sandbox, making tools available to all your employees so that they can play with them. At Deloitte, we have rolled out a tool called Sidekick. It's internal, a sidekick to every employee. They can use it for writing, for brainstorming, and it's in a safe, secure environment. Providing that sandbox environment drives a lot of the adoption. And then realizing that none of this is a once-and-done, because there are constantly new advances happening, so provide a pathway toward continuous learning. That's the first step of culture change: literacy that removes some of the fear around the hype you might read in the newspapers when you have no mental model or understanding to translate what it really means. And then making sure that you incentivize.
(25:41):
And I have real-world examples from my own experience. We had built an AI model and put it on a factory floor, where it would warn the worker if the manufacturing equipment was going to overheat and fail in the next five hours. What you realize is, this workforce has been operating that machine forever.
(26:07):
They trust their gut more than an algorithm telling them, "You need to reboot this because it's overheating and it's going to fail." As the AI team, we were super proud that, oh, this was 99.99% accurate, but none of the workers used it. They just ignored the warning and continued. If their gut told them that this was going to fail, they acted; otherwise they just ignored the tool. So obviously it was a failed project, and we needed to fix it. As a tech team, we tend to ignore that human side. This was when I was working at a manufacturing company, and it was a simple change that fixed it: we started incentivizing use, so you would earn points for using the tool and could exchange them for a hat or some company merch. That actually helped drive adoption, believe it or not.
(27:03):
So I think incentivizing the reluctant employees to make sure they're using the AI tools also helps drive that behavioral change.
Jon Herstein (27:15):
That makes sense. Just like anything else you're trying to get people to do: find the right incentives. Just going back for a minute to the ideas of fluency and safe spaces, are there still companies who are just blocking AI completely? Is that still a thing?

Beena Ammanath (27:27):
Absolutely not. No one's doing that. In fact, in my work at Deloitte, I get to speak to a lot of boards, and I've been doing that for the last three years. It's an interesting shift I've seen, Jon. In the beginning, it used to be always about, tell us about the risks, tell us about governance. And I would say in the last 12 months, it's become more about, what are other boards doing? What kind of innovation is happening in my industry?
Jon Herstein (27:55):
Are we falling behind?
Beena Ammanath (27:56):
Yeah, are we falling behind? What are my competitors doing? How can we as a board help our executive team innovate more while we worry about the governance? What kind of innovation is happening? How can we help our executive team move faster with AI? So there's definitely a shift that I see happening, even at the highest level, and that's flowing throughout the organization. I would say the days of "should we use AI or not" are way behind us, or even "why should we use AI?" There's no more justification needed. It is assumed you use AI, or you lose your competitive advantage. I haven't even heard that conversation anymore about "AI is going to replace me" or "AI is going to take away jobs." That's also become more nuanced, as people realize there is so much work that still needs to be done when you have AI; it's just a different kind of work.
Jon Herstein (28:52):
So let's talk about that because you did talk about organizations looking at departments, the way they're organized, their functions, their roles, et cetera. Clearly things will change, but what is making people more comfortable that it doesn't mean mass job losses? What's the mindset that sort of shifted, do you think?
Beena Ammanath (29:10):
I think what has shifted is that the awareness of what AI can and cannot do has risen en masse. So people are getting more thoughtful about it, realizing there's no way that AI can completely take this over, or, this is what I will need to do for this AI tool to succeed. They're getting more advanced in their thinking, going beyond "Oh, this part of my job is automated" to "but there is still so much to be done in this aspect." So I think that thinking has evolved, and it's actually helping drive the adoption.
Jon Herstein (29:51):
I think the other thing that we're seeing is this idea that there are probably tasks that you just don't even attempt to tackle in a company, because you think there's no efficient way to do them with people. And so there may be things where you think, I wish I was able to do that, but it's just never been cost-effective. And now you think, well, actually, I can. In our world, you've got thousands or tens of thousands or even millions of documents in your Box account. It'd be great if they were all tagged with metadata, but you would never put a team of people on that unless they were incredibly high-value documents for a very specific purpose. You'd never do it across the board, but now you can actually contemplate that. So you're not putting anyone out of work; you're actually doing something that you would never have done before.
Beena Ammanath (30:30):
Yes. You wouldn't have invested time or energy into it before, but now you can get it done.
Jon Herstein (30:37):
So you've seen examples of that too?
Beena Ammanath (30:38):
Oh, absolutely. Yeah. I think once employees themselves start understanding more about AI and using it, we've seen incredible ideas coming from employees on what to do, or they get their hands dirty and start building it. The ones who used to fear AI three years ago are now more on board with it.
Jon Herstein (31:02):
One of the things that we do here at Box, and I think I've talked about it on the podcast before, is we have a weekly company lunch where everyone's invited, and we give updates on the business and so forth. And every single week now, pretty much every single week, we have an employee showcasing something they did with Box AI.
Beena Ammanath (31:20):
Amazing.
Jon Herstein (31:20):
It could be someone in sales, in marketing, in recruiting, the HR team, my team, doesn't matter. And it's just showing the power and the capability. And when you combine those ground-up efforts with top-down initiatives like, oh, we want to use AI to rethink this business process, the power of those two things coming together is pretty incredible.
Beena Ammanath (31:43):
It's amazing. And it almost brings out that competitive spirit of, oh, they did this. Let me try this. Let our team do this.
Jon Herstein (31:51):
It also sparks ideas. It does. It may be a recruiting workflow, but you're like, huh, we do a similar thing over in customer success. It's not recruiting, but it's got the same sort of artifacts and the same sort of process. So we're seeing that all over the place. And it does create a little bit of a competitive spirit, like, oh gosh, the people team's ahead of us.
Beena Ammanath (32:07):
Yes, yes, exactly. And I think Box is one where I've heard about the weekly lunches, but I've certainly seen companies pushing out, whether it's an AI day every quarter or internal hackathons, non-tech companies having hackathons, and not hackathons just for IT or data science folks but open to everybody. At Deloitte, we have an AI week every quarter where the latest in AI is shared or we have cool demos. So there's definitely that, and I think that falls into that continuous learning, the more engagement.
Jon Herstein (32:45):
Yeah, more fluency.
Beena Ammanath (32:46):
More fluency. And I don't think we had invested in it as much, say, five years ago, especially on the applied AI side of the world. It was more, oh, that's something the data science team does, or the AI team, or the IT team. But now that investment has actually happened, so you see the next level coming up.
Jon Herstein (33:06):
Yeah. It's definitely driving a lot of innovation that I don't think would happen just by doing top-down initiatives. It's got to be both.
Beena Ammanath (33:13):
Yes. Yes.
Jon Herstein (33:14):
So maybe a slightly less fun topic: governance. You've touched on this a couple of times, and I'll tell you a little bit of what I've seen in my conversations with customers and here on the podcast. I think companies have largely figured out how to govern when vendors add AI to their product: what's the process you want to go through to say, "Well, are we okay with that?" It's probably similar to cloud, where the first questions were, "Well, where's my data? What's happening with my data? Is my data secure? Is my data private?" Those are the exact same questions we got on AI early on. But it feels like it's shifting now toward how you govern agents, not just governing models and where your data is, but what agents are doing in the enterprise.
(33:52):
So would love to hear what the report told you about that, what you're hearing from customers and what your thoughts are.
Beena Ammanath (33:56):
Yeah. And governance has turned out to be a crucial factor in this AI journey. It was important in the past as well, but it was considered somebody else's job. Now, especially with agents coming in and the realization that there's going to be more and more autonomous work happening, governing for it becomes really hard. And it's not as simple as putting in the rules and training an employee. How do you make sure your agents are trained on that as well? Especially in an industry like healthcare, where you need to be able to tell why a certain decision or a certain diagnosis was made. What if you have one agent looking at an image, another agent looking at what kind of drug interactions could happen, another agent looking at preexisting conditions?
(34:48):
How do you tie it all together and come to that final outcome? And when you go through FDA approvals especially, how are you providing transparency into each of those agents and what went into that final output? So when I think about governance, I think you have to really go deeper into three areas. One is the technology itself, building those guardrails into your technology. For the State of AI report, as you have seen, Jon, we've launched a pilot version of the website where you can actually interact in real time with an author. There are four authors on the report; you click on a picture, a real-time avatar of the author pops up, and you can have a discussion about the report. The kind of guardrail there is making sure that that specific LLM is trained only on the report.
Jon Herstein (35:44):
So it's not spewing facts from some unrelated sources.
Beena Ammanath (35:46):
Yes. So if you ask it even about the weather or what the Giants score is,
(35:51):
The author, which looks like a human, is just going to say, "I don't know." So thinking about those guardrails proactively, implementing them in your technology as you build it, bringing governance earlier into your ideation and design process versus making it an afterthought, because it's very hard to go back and retrofit guardrails once the technology is built. That's important. The second is process: adding additional checks into your development process and acquisition process, making sure you're asking the right questions at each point so those controls are embedded into it. And that does mean a change to the process, right? Whether it's your sourcing process where you're looking at vendors, or your software development process, adding in checks for controls around trust issues, around transparency. And the third one, I'll go back to my favorite: literacy.
(36:51):
It's not just the board's job or the compliance team's job. It could be the marketing intern evaluating a marketing tool who has to know what questions to ask and which expert to go to if they see red flags, because governance is one of those things that needs to be company-wide
(37:10):
For it to succeed.
Jon Herstein (37:11):
Right. Well, part of what I think about too is that you can have a centralized governance committee, maybe part of your procurement process, that does that initial vetting and says, "Yes, for this use case, we're comfortable using this tool. We understand how that vendor's going to handle our data." That's fine. But now you have the tool, and now you have employees starting to build agents and capabilities, and these things start working. How do you manage that? And is that governance, or is that just management?
Beena Ammanath (37:40):
It's ongoing. Governance is no longer just at the beginning. In that same case you described, what if the vendor changes the software because there's new technology, a new-
Jon Herstein (37:51):
Uses a new model, right?
Beena Ammanath (37:53):
Uses a new model. How are you going to test for that before you bring it to your employees, or does it happen automatically? You need to put it all into the contract. How do you track accountability? Especially as you bring in more agents, who is accountable? Is it the vendor who did the model upgrade, or the company using it for a use case the vendor had not really considered? Is it the CIO who approved the acquisition? It gets complicated, and I'm seeing the accountability conversation come up front more and more. There's also this question of humans and agents working together.
Jon Herstein (38:34):
Yes. I wanted to ask you about that.
Beena Ammanath (38:35):
Yes. And you hear more and more about the human workforce and the digital workforce, but really, is it a workforce? How do you want to position it in the business? There is a case to be made that you can think of AI or an agent as workforce: they have to receive the same trainings, they have to be tested for model decay, they might have to be put on performance improvement plans, they might have to be retired. There's a case to be made for that. But at the same time, from a financial perspective, you're not really paying the agents. Is it really a digital workforce, or is it a tool or a platform that you need to factor in? And it's different. Governance cannot be a matter of setting up the controls and your job is done. Just like AI and machine learning, as we've seen, the outputs change over time.
(39:23):
The models learn and evolve and change, and governance has to become more agile and be ready. The other important thing to realize is that when you use any AI tool today, you're using it based on what you understand about it today. You're putting in the controls, the governance checks, based on the risks you know about today.
(39:46):
But over time, as the models get used more and more, new risks will come up, and you have to be able to change those controls and add in checks for those new risks. I like to give this example, Jon. It's a quaint example because it's very relatable, but-
Jon Herstein (40:02):
Anything before 2022 probably sounds quaint now.
Beena Ammanath (40:05):
It is. It is. But yeah, it's from the 1900s. Legacy AI, way back. No, I'm just comparing it to a moment in time for humanity. Remember when the first car engine was created, right?
Jon Herstein (40:18):
I don't remember that.
Beena Ammanath (40:18):
Yeah. But when you heard about the first car engine, the humans who existed at that time were so excited. They had just figured out a way to put on some wheels and get from point A to point B, and it was faster than taking a horse carriage. So they started driving it. Obviously the roads were not built; the roads were for horse carriages, there was no speed limit, and there were tons of accidents,
(40:45):
But it still helped you get from point A to point B faster. So the technology of the car engine itself is one lane; AI as a core technology is that lane. Rapid advances are happening because there's massive investment going into it. The second lane is applied AI, so to speak: taking what's being built in research labs and big tech and applying it in the real world without knowing all the ways it could go wrong. You kind of guess and say, oh yeah, I might need to put a body on it so I'm protected from rain. A windshield, I need a windshield. Seat belts. So that's the second lane, but you don't know all of that up front. You don't know.
Jon Herstein (41:25):
Right.
Beena Ammanath (41:25):
Right. You learn it. And then there's a third lane, the rules of the road: the speed limits, the seatbelts, the things we address so that we don't have as many accidents. So we are living in these interesting times where each lane is moving at its own speed. We don't have it all figured out. So just as you have to be ready for your technology and tools to continuously change, you have to be ready for your controls and checks and balances to change.
Jon Herstein (41:54):
I mean, I think a really simple example in the AI space would be that the first models that came out couldn't search the web. So you knew they could only answer questions based on what they'd been trained on.
Beena Ammanath (42:04):
Yes.
Jon Herstein (42:05):
And so if you built your processes around that core assumption, a year later, six months later, it was no longer true.
Beena Ammanath (42:12):
Yes. But the thing is, you can't wait and not use it, because you're still getting value from it. You can't just stop using it.
Jon Herstein (42:22):
Right. Well, your point is use it, adopt these things early, but think about fluency, think about literacy, make sure your employees understand it, and then be willing to evolve how you think about things like governance as the models change.
Beena Ammanath (42:34):
Yeah. Yeah. Yeah.
Jon Herstein (42:36):
Got it. Okay. Gosh, so much more we could talk about.
Beena Ammanath (42:39):
Yes. Yes. This was a great conversation.
Jon Herstein (42:41):
Anything else on governance, from a practical advice standpoint, that people should be thinking about that we haven't covered?
Beena Ammanath (42:47):
Yeah. It doesn't matter where you sit in the organization, it is super important to think about the ways it could go wrong. Look, I'm a trained computer scientist. I was not trained to think about how the code I write can go wrong beyond bugs, how it can have a broader impact on humanity. Trust me, when I studied computer science, I never had the impression that the code I wrote would end up running the world, which is what's happening now. So no matter where you sit in the org, it's important to put some critical thinking into what the broader implications are. We are way beyond the phase of thinking about the risks or the side effects of using AI afterwards. If you're smart enough to build AI models, you should be smart enough to put in some effort to think about the ways they could be misused, the ways they can cause side effects, and put in those guardrails.
Jon Herstein (43:42):
Right. So that's at a company level and a governance level, and you've made the point that it needs to be distributed throughout so that everyone's thinking about it. But can you also comment a little on what this means at the management level? If I'm now a manager managing a team, not just of humans, but with agents doing work for me, I'm still responsible for the output of my team. I'm still accountable for that. How am I going to need to think differently about what I do every day, what my responsibilities are, what I need to think about for governance? And again, as practically as you can: how should managers be thinking about this?
Beena Ammanath (44:19):
I think as managers, it's even more important to think about governance. You probably have built-in checks and balances for employee behavior and employee trainings, make sure they have-
Jon Herstein (44:31):
Well, handbooks, right? Those things exist.
Beena Ammanath (44:33):
Yes.
Jon Herstein (44:33):
Performance reviews, we have those things.
Beena Ammanath (44:34):
Yes. Take some of those same processes to start with, apply them to agents, and tweak them over time as you learn, but work closely with your people team, with HR. The way you manage your human team, you need to start thinking about the same for your agents: how do you make sure they're performing, that they're not going rogue or slacking off, and that there's a way to continuously monitor them so they don't impact your goals in a negative way?
Jon Herstein (45:08):
One of the things I like to wrap with is a few concepts that are really important to me in the world of customer success, and those three are value, culture, and experience. What I mean by that in a customer success context: value, are we delivering value for the end user, the customer, et cetera; culture, how do we think about change and get people to think differently about what they're doing, which we've touched on a bit; and experience, how do you deliver a great experience through all of this? So I'd love to just rapid-fire a little with you on those three concepts.
Beena Ammanath (45:40):
Sure.
Jon Herstein (45:41):
Let's start with value. And I guess maybe the core question is when you're pursuing some of these initiatives and you're thinking about this from an applied AI perspective, how do you ensure you're delivering value and how do you know you've delivered value?
Beena Ammanath (45:54):
And AI makes it complicated, right? It's no longer just about the dollar value or hitting your numbers. It can deliver value in so many different ways. It's crucial, as part of your transformation efforts or as you're brainstorming, to think about what kind of value you want to measure. Is it the ignored work that you talked about, the things you didn't do because there was no time for them? Maybe it's about workforce motivation. In a prior survey we did, workforce optimism went up when people got AI literacy training, by 72% or something significant, where they felt they were part of the journey.
Jon Herstein (46:39):
Employee sentiment is something that you could measure and-
Beena Ammanath (46:41):
Yeah. So there's so many different ways that you could measure value besides just the dollar number. So I think for AI and as we go more into the agentic AI space, it's really important to take a step back and think about what kind of value metrics do you want to track and measure.
Jon Herstein (46:58):
Do you think it's important to know that before you start or is that something you can sort of figure out as you go?
Beena Ammanath (47:04):
I think you need to think about it before you start, and I can guarantee you that what you decide will evolve and change as you start on the journey. There's no single playbook; it's not so well defined that these are the exact metrics, this is what others see. You have a rough idea. So you need a north star while you're doing this, but it might evolve or change as you go down that journey.
Jon Herstein (47:31):
I mean, I think all of our personal experience with AI is you're pretty constantly surprised by something you weren't expecting, right?
Beena Ammanath (47:38):
Absolutely.
Jon Herstein (47:39):
Mostly in a good way. But to your point, you wouldn't know to measure that beforehand. Even in the example of a process we never would have done before, you can't measure ROI on something you weren't doing, right?
Beena Ammanath (47:50):
Yeah. Yeah. But you should have a north star. I've moved beyond that phase of technology just for the sake of technology.
Jon Herstein (47:57):
Right, of course.
Beena Ammanath (47:57):
As a business, as a leader, you need to think about why you're even trying to do it. And trust me, it might change completely by the end of the journey.
Jon Herstein (48:09):
As we talked about earlier, the North Star may start out as being process efficiency, but it really should be much more than that.
Beena Ammanath (48:14):
It should be much more than that. You might start with process efficiency because that's an easy metric and you can-
Jon Herstein (48:20):
It gives you something to talk to the board of directors about.
Beena Ammanath (48:23):
Yes. It gives you the buy-in. It gives you the investment that you need to start, but it can become much more than that if you keep your vision open.
Jon Herstein (48:31):
So now let's talk about culture and change. And what would you say is ... I think I might know what you're going to say, but let's see. What would you say is the most important factor for driving successful change management through an organization in this time with these kinds of capabilities?
Beena Ammanath (48:47):
Yeah, you're right. It's AI literacy. And I've personally experienced it, having worked in so many legacy companies. When you're at the cutting edge and you know where this technology is headed and what you're trying to do, you bring it in and realize not everybody's with you on the journey. So I think AI literacy is the number one thing to bring to your team and to the broader organization, and not just to your teams but to your leadership and to the board.
Jon Herstein (49:17):
Okay. And then our last topic is experience, and we can think about this both as employee experience and as customer experience. Any pragmatic thoughts, advice, or tips for folks on how to make sure you do this in a way that actually provides a great experience to those key stakeholders?
Beena Ammanath (49:32):
I heard a great thing from my teenager a few months ago: it should be so seamless that you don't need a manual or any kind of training. You don't need any instructions. Whatever you build should just be intuitive. That's the experience you want, right?
Jon Herstein (49:50):
Sounds like someone who's going to be a product manager one day.
Beena Ammanath (49:53):
I hope so.
Jon Herstein (49:55):
That's great advice, and certainly something to strive for: it's just part of how you do things, not this, "Oh, I've got to go do AI now."
Beena Ammanath (50:02):
Yes. People should not even know there's AI under the hood or it should not be called a digital tool or AI tool. It's just, here's what you use to do this.
Jon Herstein (50:12):
So I have a little bit of a prediction on that front, so we can compare notes and see what you think. Right now, if you look at software products, the AI is very explicit: there's a button that says AI, ask AI, whatever. And my prediction is that not too far in the future, of course there's AI in there, so why would you call it out? If this is the summarization feature, you don't have to say AI summarization, you just say summarize, right? Do you agree?
Beena Ammanath (50:38):
Yes. I think by the end of 2026, that'll be gone.
Jon Herstein (50:42):
Okay.
Beena Ammanath (50:42):
I think it'll not be there anymore.
Jon Herstein (50:45):
Let's check back next January.
Beena Ammanath (50:46):
Yes, let's do that.
Jon Herstein (50:47):
It would be great to talk to you again. This was, I think, an amazing conversation: very rich, very insightful. I learned a lot, and hopefully all of you learned a lot. The report is available; anyone can get to it, and we'll have a link in the show notes. Go talk to Beena's avatar as part of that. That'll be tons of fun. And yeah, we really appreciate you being here.
Beena Ammanath (51:06):
Thank you so much for having me, Jon. This was great.
Jon Herstein (51:08):
I know I learned a lot and I know our audience did as well. What stands out to me is that the future of the enterprise isn't just about deploying smarter technology. It's about leading organizations that are ready for autonomy, speed, and change. AI agents represent a new operating model, one that requires intentional governance, strong leadership, and a clear understanding of where humans and machines each add the most value. Thank you to our listeners for tuning into the AI First Podcast. If you found today's conversation valuable, be sure to subscribe and we'll see you next time as we continue exploring how AI is reshaping the enterprise responsibly and at scale. Thanks for tuning into the AI First Podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders.
(52:04):
Until next time, keep challenging assumptions, stay curious and lead boldly into the AI first era.