AI-First Podcast

The AI race isn’t about who starts first, it’s about who gets value fastest.

In this episode of the AI First Podcast, Box Chief Customer Officer Jon Herstein sits down with Box CIO Ravi Malick to zoom in on the biggest insights from a year of conversations with forward-thinking CIOs and technology leaders.

They reflect on what it really takes to move from AI hype to impact, from laying the groundwork with structured content and scalable governance, to building trust, navigating cultural resistance, and measuring value in a way your CFO will actually buy into.

You’ll hear sharp insights from guests across industries including real estate, healthcare, financial services, and higher ed, all focused on putting AI to work across content and knowledge workflows.

Key moments:
(00:00) Why AI can’t replace human connection and why it shouldn’t
(01:26) Ravi Malick returns to recap the biggest lessons from 2025
(02:23) 2026 is the year to go from pilots to scalable AI
(03:29) 2025’s AI acceleration: funding, models, milestones
(04:29) The fragmentation of strengths across AI providers
(05:29) Consolidated content as the foundation for AI
(08:38) Unlocking unstructured data with metadata and governance
(10:13) Rethinking IT’s role: from gatekeeping to enabling innovation
(12:14) Why guardrails beat gates
(14:21) Shifting from top-down control to bottom-up experimentation
(17:01) Extensible governance models for fast-moving environments
(18:42) The importance of timing, agility, and building AI foundations
(19:32) Why sandboxes matter and how to balance freedom and safety
(20:55) Personal agents vs. hero apps - and what will win
(23:10) Empowering employees through education, not restrictions
(26:07) Cultural shifts: trust, experimentation, and collective learning
(27:31) Why change management is the hardest part of AI adoption
(29:31) Making AI feel like an unlock and not a threat
(31:19) How peer examples and storytelling drive real usage
(33:59) CIOs modeling behavior to build confidence in AI
(38:27) Choosing where not to use AI and keeping it human
(40:30) Measuring impact: outcomes > usage metrics
(41:48) Why ROI must be visible in the balance sheet
(44:05) Real-world impact: AI driving donations and business value
(45:22) Hard questions CIOs need to ask before scaling AI
(47:29) When custom AI platforms fall short and why
(48:21) Scaling what works: from experiments to enterprise-wide rollout
(49:33) Reengineering processes, not just automating tasks
(51:04) AI reliability: from 70% confidence to 90%+ precision
(52:25) The fast lane, the slow lane and how to choose your pace
(52:51) What’s next: AI that teaches you how to use it
(54:28) 2026 predictions: governance, commoditization, and agent adoption
(57:40) Audience Q&A: emotional intelligence in the AI age

What is AI-First Podcast?

AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.

This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.

If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.

Ravi Malick (00:00:00):
AI will not replace the human connection. And I think as we said earlier, in some cases, it may help amplify that or reinforce it. AI can't replace that. It might be able to augment how I learn or augment my process, but that's unique to me. And I think we still have to remember the uniqueness of humans, the difference in humans, understand that, celebrate it, recognize that as strengths, and how do you use AI to amplify those things?

Jon Herstein (00:00:29):
This is the AI First Podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence and putting AI at the core of enterprise transformation. Hello, everyone, and welcome back to AI First, the podcast where we go beyond the buzz and into the real world of leading in an AI-first world. I'm Jon Herstein, and today we're doing something a little different. Over the past year, we've had the privilege of sitting down with 12 remarkable technology leaders, CIOs, CTOs, and digital innovators who are all wrestling with the same big question: how do you turn AI from hype into real, durable value? Our guests have joined us from across industries, including healthcare, financial services, higher education, real estate, professional services, the nonprofit world, and technology.

(00:01:26):
They've shared their wins, their missteps, the hard conversations they've had to have with their boards and their CEOs, and the very human questions their teams are asking as AI changes how work gets done. My guest today is a return guest, Box CIO, Ravi Malick. We're going to pull together the biggest themes that emerged this year. The foundations that made AI success in 2025 possible, the governance models that enabled experimentation instead of blocking it, the change management playbooks that actually drove adoption, and how leaders are really measuring value beyond time saved, and what all of that means for 2026 as we move from pilots to scalable production AI. You'll hear impactful quotes from all 11 conversations, short, punchy moments where our guests crystallize something every technology leader needs to hear heading into this new year. We've also taken everything we learned and turned it into a companion resource.

(00:02:23):
It's our 2026 AI playbook for CIOs. Think of this episode as the highlight reel, and the playbook as the field manual you can take back to your leadership team. You can find it at blog.box.com. Whether you're just starting your AI journey or you've got dozens of initiatives in flight, this conversation is designed to give you clarity, language you can use with your stakeholders, and practical ideas you can put to work as you kick off the new year. But first, let me quickly summarize, if that's even possible, all that happened in 2025. At least 27 major model releases, over $80 billion in capital raised. The Pentagon wrote $800 million in checks to OpenAI, Anthropic, Google, and xAI. GPT-5 hit 94.6 in tests on advanced math. Claude Sonnet 4.5 reached 82% on coding benchmarks. Companies like Hertz reported five times faster development. Gemini 3 became the first model to break 1501 on LMArena, and Llama 4 went fully open with a 10 million token context window.

(00:03:29):
In the background, DeepSeek in China trained a 671 billion parameter model for under $6 million, proving you don't need unlimited capital to compete, and open-sourced it. The pattern we're seeing is that this isn't just a winner-take-all market. Different providers are owning different capabilities. OpenAI, for example, has brand and reasoning, Anthropic has coding, Google has multimodal, Meta has open-source momentum, and Amazon and IBM have focused on enterprise integration. We saw context windows hit anywhere from two to 10 million tokens. Costs dropped 60 to 90% while performance actually improved. And agentic AI, importantly, moved from demo to production. The bottom line: the technology works. It's reliable, capable, and economically viable at scale. The question shifted from "can AI do this?" to "how do our organizations deploy it and get value from it?" So that's the background, that's the landscape. Let's dive right in. We're going to talk about what we heard from our 12 tech leaders and what it means for how you lead in 2026.

(00:04:29):
So Ravi, welcome. How are you?

Ravi Malick (00:04:33):
I'm doing well, Jon. Thank you for having me back. I appreciate it. I'm looking forward to this conversation after this warp speed of a year in AI.

Jon Herstein (00:04:43):
It's definitely been warp speed, maybe even something beyond warp speed. But let me start, or let us start, with the foundations that have made progress in AI possible. And I would say that every guest we spoke with this year, without exception, emphasized that AI success didn't start with AI. It actually started with decisions they had made years earlier about where their content lives, how it's organized, and who has access to it. And the organizations that moved fastest with AI in 2025 were the ones who'd already consolidated their content into unified platforms. I'm going to start with a quote from Philip Irby of American Homes (AMH), and this is a direct quote from Philip: "The whole time I'm like, you guys don't understand. Once we have it all in one place, then we can realize the value of AI.

(00:05:29):
And now with what's happening with AI, I feel like a prophet, which is pretty awesome." You know Philip really well. He is definitely an innovator. I don't know if he's a prophet, but he is certainly an innovator.

Ravi Malick (00:05:40):
He is. Yeah. Philip is definitely one of those guys where it's like 10% of the people understand what he's doing and the rest of the 90% are like, "Wait, what? What's going on?" But it turns out that he actually may have been a prophet in that sense. I certainly feel in our case, because we run Box on Box, that we have gotten a significant head start foundationally relative to other folks. We don't have to worry about where our content is going to be and trying to aggregate it when it lives in 30, 40 different places. What we're focused on right now is making sure that that content, which is the context for everything that we're going to do, AI related, is curated. So we're already thinking about how you operationalize the ongoing management of that content to make sure that it's fresh and relevant for what agents are going to be doing in our environment.

Jon Herstein (00:06:35):
Yeah. And this idea of this prophet moment, where it's years and years of work, in this case, consolidation of unstructured content and so forth, feels like it suddenly pays off, but it actually was years in the making, was definitely a theme we saw across industries. In fact, Graham Link, who leads technology over at November, said something very similar. He said, "All of our data, all of our content in one place, and now we can start to leverage AI tools on top of that, which has been a huge benefit for us."

Ravi Malick (00:07:04):
Yeah, for sure. I certainly see CIOs contemplating, and in some cases struggling with, how to scale AI and how they get there. And what they're realizing is that it is highly dependent on the content, on the unstructured data. Most places have, I'd say, relatively good discipline on structured data and managing it, the access, how it's utilized, but the unstructured side has honestly been somewhat neglected. It's been put to the side. Well, I think we've always known the importance of unstructured data and content and how it really drives the engine of the enterprise. It's always been the glue; it fuels the human middleware that stitches complex environments and applications together to produce end-to-end business processes. It's having its moment now because people are starting to focus on it. They're starting to look at it and understand, okay, my core system does X, but it's being supported by Y, and Y is effectively all these manual processes that are codified in unstructured data. The thoughts, the presentations, the strategies, the board decks, the operating manuals, the customer notes, all of the things that effectively make up the foundation of how a company runs, that's all unstructured data.

(00:08:38):
And it is absolutely the key ingredient for successful AI and being able to measure the outcomes of AI. Or really, not just measure the outcomes of AI, but make sure that AI is producing the right outcomes and driving the right things. Because as I mentioned earlier, that context is so incredibly important. And we've seen this even ourselves as we have been on this journey: just how important the context is, and that it's accurate, fresh, and not encumbered by age, by older versions of things, in terms of producing the right answer and the right outcome.

Jon Herstein (00:09:18):
And I think one of the more powerful things that we're seeing is the bringing together of unstructured and structured

(00:09:24):
And AI really allowing that. One of our guests, Robert Entin at Vornado, made the point that structured metadata on top of unstructured documents is really the key to unlocking AI's potential for very complex queries. And he talked about building metadata templates with numerous fields to make lease documents, in his specific case, truly searchable and analyzable by AI, which you really couldn't do before, or you were relying on human-created metadata, which is problematic in all sorts of ways. And that level of intentionality about data structure is really what separates organizations that are ready for AI and getting value from their unstructured content from those that aren't. I'll give you his specific quote on this. He said, "Being able to have them in a place that you can organize them, create a hub, and then put a good AI engine on top of it will give back many, many hours to workers," in his case, the knowledge workers in his business.

(00:10:13):
So what's your take on that in terms of what you see with our customers and also how Box is approaching this?

Ravi Malick (00:10:19):
Yeah, I think metadata is also having its kind of moment in the spotlight, because I think it had been isolated to pockets, because metadata's hard. It's hard to extract at scale and populate at scale. And I don't know about you, Jon, but when was the last time you populated the metadata for a PowerPoint presentation?

Jon Herstein (00:10:39):
Most weekends for me.

Ravi Malick (00:10:41):
Well, it's not a habit that people want to get into. They want to work, right? They're focused on what's actually inside the content, not the metadata side, because quite frankly, we haven't been trained that way. The value of it has been elusive. But now with AI, you can do a few things. One, you can extract it at scale, which is certainly something that we are implementing and leveraging as part of our contract lifecycle management process. Then you can use it to drive workflows, and it'll provide some of the input into our semantic layers, in terms of being able to translate how we work as a company and how data is classified and managed at Box, and provide that information and additional context to agents that are doing work within our environment. We're definitely seeing the importance of it. I think in the past, metadata has largely been relegated to kind of ECM-era projects.

(00:11:46):
So enterprise content management, very specific, discrete areas of the business, often compliance or regulatory oriented in terms of process workflow and being able to audit what's in that process, the inputs and outputs of that. Unfortunately, it hasn't been available. The technology just hasn't been there to make it available to the wider population and now it is. AI gives us that capability and it's incredibly powerful.
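The pattern Ravi describes, extracting metadata at scale and then using it to drive workflows, can be sketched in a few lines. This is a hypothetical illustration, not Box's actual tooling: in practice an AI model would populate the template fields, so the regex-based `extract_lease_metadata` below is just a stand-in, and the field names are invented for the example.

```python
import re
from dataclasses import dataclass

# Hypothetical metadata template for a lease document. In a real system
# an AI extraction step would fill these fields; a regex stands in here.
@dataclass
class LeaseMetadata:
    tenant: str
    annual_rent: float
    expiry_year: int

def extract_lease_metadata(text: str) -> LeaseMetadata:
    # Populate the template from the document text.
    tenant = re.search(r"Tenant:\s*(.+)", text).group(1).strip()
    rent = float(
        re.search(r"Annual Rent:\s*\$([\d,]+)", text).group(1).replace(",", "")
    )
    year = int(re.search(r"Expires:\s*(\d{4})", text).group(1))
    return LeaseMetadata(tenant, rent, year)

def route_for_review(meta: LeaseMetadata, current_year: int = 2025) -> str:
    # The metadata drives the workflow: leases expiring within a year
    # go to the renewals queue, everything else is archived.
    if meta.expiry_year - current_year <= 1:
        return "renewals"
    return "archive"

doc = "Tenant: Acme Corp\nAnnual Rent: $120,000\nExpires: 2026"
meta = extract_lease_metadata(doc)
print(meta.tenant, meta.annual_rent, route_for_review(meta))
```

The point of the sketch is the separation of concerns: once the template is populated, downstream workflows and agents query the structured fields rather than re-reading the raw document.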

Jon Herstein (00:12:14):
Yeah. And there's obviously still work to do. It's not like you snap your fingers and magically all your content is tagged and has metadata and so forth. There's a process you've got to go through, but having your content organized, structured, and surrounded by metadata tagging and classification is clearly becoming table stakes for getting value. Now let's assume you've done that, or you're in the process of doing it. The next thing we saw, a sort of separation between leaders and everyone else, is how they think about governance around AI. And I would say the core theme I heard was thinking about governance not as a blocker, but as an enabler. This was an encouraging pattern. The old model of IT as the place where you go to get "no" is giving way to something a bit more nuanced, which is that the best governance frameworks I heard about were creating guardrails as opposed to gates.

(00:13:01):
Does that resonate with you and are you thinking about in your role and what have you heard from customers?

Ravi Malick (00:13:06):
100%. And Jon, I'll overlook the fact that you said IT is the "no" department.

Jon Herstein (00:13:12):
In the old days, Ravi, the old days. Other CIOs.

Ravi Malick (00:13:17):
No, it is. My own point of view on IT has always been as an enabler, and it is: how do we provide the guardrails? And honestly, how do we make sure that the brake system works well, so that we can go fast, and go fast in a controlled way? I think that is the nature of AI, because it is so pervasive. It's on the consumer side, and we see this time and time again: what people use in their lives outside of work, they have an expectation that they can do the same things at work. And so just the fact that it took off so quickly there, I think, accelerated the need to figure out how you say "yes, but" instead of just "no." That's the approach that we've taken, our sort of let-a-thousand-flowers-bloom approach: understand how people are using it, observe, ask questions, surface the use cases that we think we can scale beyond one or two users, and really start to understand the way the technology's being used and how people are adopting it.

(00:14:21):
One of the biggest challenges that we've had, I think we'll continue to have, and you'll continue to see is getting people to think differently about it. The tech is going to continue to evolve. I mean, the tech will, I think certainly in the next several years is going to evolve, is going to mature. It's going to fill the gaps. The tech typically isn't the issue. We can get it to do what we ultimately want it to do. It's like how do you optimize? How do you make sure that you're finding the areas that people are power users and really kind of taking full advantage and how do you find the people that are slower to adopt and close those gaps? The concept for CIOs is the idea that you have to go out and you have to communicate and you have to talk and educate people as part of this governance effort, I think is incredibly important.

(00:15:16):
And yeah, outside of a few industries. I don't want to discount the fact that there are some industries that have a real need for stronger governance: financial services, critical infrastructure suppliers, things of that nature. They're in areas and situations where they really do have to have stronger gates versus lanes. But even that, I think, is just a process of legislation and regulation, and of regulators maturing their understanding of AI, understanding how it works, and being able to have the kinds of information, like observability and auditability, to ensure the technology's not going awry.

Jon Herstein (00:16:01):
Yeah. There's a great quote from Kamal Bador at the University of Chicago on this topic, where he said, "It's easy to get swept up in the excitement of these things and spray your data all over the place, but we at Chicago are more careful than that." And he said, "Think about platform. Think about how these things connect to each other, how these things can connect to new things as they come up and become extensible." And I think this idea of extensibility is really critical, because clearly the AI landscape is moving so fast that any governance framework you put in place today has got to be adaptable. You can't lock yourself into rigid policies.

Ravi Malick (00:16:33):
Don't fall in love with a policy or a roadmap.

Jon Herstein (00:16:38):
Right, absolutely. Well, actually, John Allen from Baylor made exactly the same point. I don't know if it's coincidental, but they're both in the university space, but he gave a great example. He said, "We put our guideline in place and we're already looking to review that again because the space changes so quickly." He said, "What may have been things that people were really uncomfortable with from a usage perspective 12 months ago, well, that needle may have moved."

Ravi Malick (00:17:01):
For sure. I think the theme in IT and technology in general has always been that change is constant. And Kamal makes really good points that I would encapsulate in the concept of understanding your timing. An analogy for that is understanding what lane you need to be in on the highway. Because, honestly, there are a lot of companies that don't necessarily need to be in the fast lane. You can be in the middle lane, or even one of the outer lanes, not having to go as fast, watching the traffic pattern and seeing how things shape up and develop. Because particularly on the model side, the model world just changes so rapidly. New models are coming out weekly with new capabilities, and those capabilities are not just small turns of the dial. I mean, they are material improvements in areas.

(00:17:55):
As you mentioned at the beginning of the podcast, the model makers have focus areas where they have noticeable strengths and differentiators from other model makers. As a CIO and as a company, I need to figure out, okay, where do I need that speed? Where do I need that agility and be able to move quickly? And where can I sit back? Where can I think more pragmatically about this? Where can I be deliberate about my architecture and understand the use cases and really kind of focus on the planning, the building blocks, the foundation to where when I'm ready to go and I understand how I'm going to incorporate things, I can then move fast. It's almost kind of a concept of going slow to eventually go really, really fast.

Jon Herstein (00:18:42):
Well, and you probably need to be doing these things in parallel too, right? Mike Pepper of Stanford Health talked about creating a secure sandbox with over 15 models, where anyone in the organization could come and experiment, so that he's not dictating what they can be used for. He literally said, "Go have fun, learn, experiment, and then develop products based on how people are actually using the tools."

Ravi Malick (00:19:03):
Yeah. I think that I've seen time and time again that companies that embrace that kind of approach and don't try and force their employees into a single lane using a single tool or using a single model, I think are seeing the benefits of that. They are ahead in the game. I think they have better understanding, they have better knowledge. They've understood what they need to educate their employees on. I think they're better informed on what they need to govern, what the actual risks are. So I'm highly supportive of that kind of approach.

Jon Herstein (00:19:32):
And it's a pretty powerful inversion of at least some of the older models of how IT worked with the business. Instead of IT deciding what it should and shouldn't do, pushing it out to users, and then trying to get them to use it, it's actually flipped around: we're letting users discover value and then productizing what works. And it's not either-or. I think we're seeing both, right? Certainly at Box we're seeing both models: top-down initiatives, but also, what are people finding and using in their daily lives that we then bring to the broader organization?

Ravi Malick (00:20:01):
The approach that we initially took was to let people experiment, let a thousand flowers bloom. But at some point you have to start curating that, right? You have to start tending the garden, making sure that it's growing in the right areas, and maybe you want different colors. And so that's where we are now. We've taken, I would say, not a completely top-down approach, but we're trying to meet in the middle, where we do have some centralized structure: an operating council that ensures strategic cohesion and the sharing of use cases, and makes sure we're identifying similar use cases across the business. We're combining that with the approach of what is commonly called citizen development, enabling people to surface their own use cases. We know that we want to focus on what we're calling hero agents.

(00:20:55):
So, agents that can scale, where we have not just three or four users but maybe 50 or 100 or 200, where an entire business unit or the entire company can leverage a single agent or set of agents. At the same time, though, we know that how people do their work, and where they are, can be somewhat unique and personalized. And so we still want to give them the power to figure out, "Hey, I've got these 10 tasks that I do every day. It's kind of unique to my job. I know we're not going to build a hero agent for this." So, hey, I'd love to be able to enable that person to build an agent that can do that for them, or leverage AI in some way that can do that for them, so that we're still capturing the productivity gains at an individual level, but also at the enterprise level.

Jon Herstein (00:21:40):
Yeah. But all of this requires trust: trust in your people, trust in your guardrails, and trust that this experimentation is actually going to lead to something useful, yielding insights that maybe you as the CIO couldn't have even predicted. Mike actually went on to say, "AI is not a team or a person. Everybody in technology needs to understand AI." It's a tool that everybody needs to understand and embrace and use. Do you agree?

Ravi Malick (00:22:06):
It's very similar to the cybersecurity approach, where cyber is not an IT problem or a CISO problem, it's an enterprise problem. I think AI is an enterprise opportunity. Everybody needs to understand it. Everybody needs to think about how it's going to improve their work-life balance, how it's going to unlock capacity, and maybe even unlock new abilities. We've certainly seen new job creation as a result of AI, things we weren't contemplating in the past, like prompt engineering and, even more recently, context management. Everybody needs to figure it out. It needs to be a part of what they do each and every day. And I certainly see that the younger generation, the next wave that's going to be entering the workplace, is already kind of AI native. I have a senior in high school; I don't think he does anything without using AI in some way, shape, or form.

(00:23:10):
It's already ingrained in how he does work. It is very much ingrained. So we already know that it's going to be that way. Everybody's going to be using it.

Jon Herstein (00:23:19):
Given that, and whether or not people are AI natives (for the most part they're not yet entering the workforce), I think all of us are getting very comfortable with this idea. How have you and Box thought about balancing that speed and adoption with security when rolling out these AI capabilities? Thinking about how you negotiate security and privacy terms with vendors before you turn features on, what should people be thinking about in that dimension?

Ravi Malick (00:23:45):
Yeah, love that question because I think a lot of times we can get caught up in the possibilities and the-

Jon Herstein (00:23:53):
Excitement.

Ravi Malick (00:23:54):
Yeah, you get caught up in the excitement and the possibilities of AI. But it also has, if you want to call it that, a dark side, right? And that's the security side. When we were first starting this journey, we knew that we needed to have some level of observability. We knew, okay, we need to see who is accessing what and make sure we understand what it is and that it's adhering to our policies. So we employed a regular review of basically the network, looking for AI tools. And there were situations where we blocked things, right? We kind of took the whitelist approach to what we were comfortable and uncomfortable with, and to what platforms or applications people are accessing. That's very quickly transitioning to, okay, how do we think about security for agents? And I think we're fortunate, and I'm certainly very, very fortunate, in that most of what we're going to do is going to run on Box.

(00:24:57):
And so how we think about security for agents, with Shield and just our architecture, is incredibly helpful. But my world will not solely consist of Box agents. It might be the majority, but I will have to make sure that other agents are secured the same way. And so we're looking at, okay, how do you make sure that an agent has the right access, that somebody hasn't been over-provisioned so that an agent acting on their behalf actually has more access than was assumed when the agent was built? How do you make sure that an agent stays in its lane? How do you make sure that it has the observability and the auditability for compliance, for SOX compliance? I mean, some of the areas that we've identified as kind of high-energy, high-repeatability, with a mid-to-high level of critical thinking, are in our compliance processes.

(00:25:51):
But we need to make sure that as a result of deploying AI there, we are still compliant. We can still check all the boxes we need to, just as when a human's doing it. So it's the security, it's the governance, it's the compliance, right? All of these things are rapidly becoming incredibly important.
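The agent-permission concerns Ravi describes, over-provisioning and keeping an agent in its lane, can be illustrated with a simple scope check. This is a generic sketch, not Box Shield or any real API, and the scope names are invented: the idea is that an agent acting on a user's behalf gets only the intersection of its own scopes and the user's, and anything the agent holds beyond the user's access is flagged for audit.

```python
def effective_scopes(agent_scopes: set, user_scopes: set) -> set:
    # An agent acting on a user's behalf should never exceed that user's
    # own access: the effective grant is the intersection of the two.
    return agent_scopes & user_scopes

def over_provisioned(agent_scopes: set, user_scopes: set) -> set:
    # Scopes the agent holds beyond the user's access; these are the
    # ones an audit should flag.
    return agent_scopes - user_scopes

# Hypothetical scopes for illustration.
agent = {"read:contracts", "write:contracts", "read:hr"}
user = {"read:contracts", "write:contracts"}
print(sorted(effective_scopes(agent, user)))   # what the agent may actually do
print(sorted(over_provisioned(agent, user)))   # what to flag for review
```

Logging both sets on every agent invocation is one simple way to get the observability and auditability the conversation calls for.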

Jon Herstein (00:26:07):
And what's interesting is that even though there's all this excitement and people figured out interesting ways to leverage these tools maybe in their personal lives, planning a kid's birthday party or whatever it is, it's actually been a little bit harder to get people to use these tools at work in a useful way. And I think what we've learned is that that's really less about the technology itself and it's more about just core human attributes. And I'd say if there's one thing that every single one of our guests agreed on is this, that rolling out the tools is the easy part, but getting people to actually use them and use them well, that's where the real work happens. We heard about anxiety, skepticism, and even resistance. And so it's a challenging area, but we did hear about some strategies that work. And I want to kind of go into that in that area now.

(00:26:49):
I want to start with a quote from Matt Lightsen from IBM who said, "There's some people that are all gung-ho and some people that quite frankly have a lot of anxiety about what this is going to mean to them personally, their roles, maybe even their careers." And he said, "I think the real answer is that none of us really knows." And so it's a legitimate thing. People are concerned. So I'm sort of curious, how do you think about asking people to adopt these tools in a way that might truly fundamentally change their jobs and we can't even tell them exactly how, right? What do you think?

Ravi Malick (00:27:23):
Yeah, it is. Well, I'll tell you, if I had the perfect answer for that, I might be sitting on a beach

Jon Herstein (00:27:31):
With a cold drink. You'd have written the book by now. And I'd have written the book. Yeah.

Ravi Malick (00:27:35):
There's an interesting dynamic to this. One of the things that is, I think, unique to AI relative to some of the other technology that we've seen, and maybe somewhat similar to mobile and the web, is that it cuts across, or has the ability to cut across, every part of the company. It's horizontal as much as it is vertical. It can impact everybody. And the change management of that is incredibly important and incredibly powerful in terms of adoption and getting people to buy in. It is by far, in my opinion, the biggest hurdle for most companies, even for us as a technology company. In the early days, we even experienced some of that, right? There was some fear. I think the one thing that we did a really good job of was articulating that, for us, this is an unlock.

(00:28:33):
Because there are unknowns about it in the long term, we want to understand how this is going to unlock the ability for us to do things that maybe we haven't done before, to grow in ways that we haven't grown before. It is not being viewed as a cost-cutting or purely a cost-management approach. That was incredibly important: one, to come out of the gates saying that, but two, to double down and keep communicating it, articulating our approach and our strategy. Look, there's no denying that at some point it is going to shift. Jobs are going to change. Jobs that exist today may not exist in the next two years, three years, five years. That is how technology works. Making sure that people understood that's not what we're trying to accomplish out of the gate, that this was going to be an evolution, not a revolution, was incredibly important.

(00:29:31):
The people dynamic, I think, will make or break some companies in terms of their ability to effectively leverage AI. You have to talk to folks. There was this initial perception, because of all the hype and how fast things were moving, that this was going to just permeate and change things overnight, and that's not the case, right? I mean, the promise and the excitement for us around the agentic workforce is probably higher than for anything else that we've done. But at the same time, that is going to take effort; that's going to take work. We're a decent-sized company, multinational, global, but think about companies that have 100,000 or 150,000 people. That change is going to take much longer. Making sure that people understand what that horizon looks like, how it's further away than what might be articulated in the press and the media, is also important: yes, we know we need to leverage the technology, but we're going to be very deliberate about it.

Jon Herstein (00:30:38):
Well, and there's a very optimistic take on this too. And I loved a line from John Allen from Baylor who said, how can we make sure that people recognize that AI is actually an enabler to greater human interaction? It's not a detractor from human interaction. And he said, "What I see is here's the rise of the ability for us to turn away from the screen and enable people and interact with people again."

Ravi Malick (00:31:02):
I think a lot of times we think about how can we, as people do more work, and maybe it's actually about how do we correct and do less work and focus more on the human interaction and the relationships that are meaningful both professionally and personally.

Jon Herstein (00:31:19):
With AI handling things like documentation, summarization, the very routine tasks that people are doing today, you get to spend more time actually talking to people. We heard this also at Stanford Health, where they said something as simple as an AI taking notes for a doctor, so the doctor's looking at the patient instead of at a chart or an iPad, is a huge, huge unlock for just doing a better job.

Ravi Malick (00:31:48):
For sure. And for trust, the idea that AI can repair the trust that I think has eroded over the last several years.

Jon Herstein (00:31:57):
Totally agree. Now let's go a little bit deeper on the change management side, because there are a lot of questions about how you actually get people to do something different. This is a universal theme in technology; it's nothing unique to AI. But what I heard over and over again was that the most effective adoption strategies relied on peer advocacy rather than IT mandates. So I know, Ravi, as CIO, you telling people to do something is very important, but let me just give you one quote. Amel from Withum said, "Word of mouth from someone that does what you do goes beyond any marketing message that I as CIO could possibly put out." What do you think?

Ravi Malick (00:32:31):
100%. This applies, I think, to any person in a leadership role, right? You have to be ultra selective when you use a top-down approach, because it works in very narrow situations, and people remember it. Yeah, it is the word of mouth, the "I see somebody else doing it." So the self-selection to opt in, even, to some degree, developing FOMO: the folks who have this and can use it are getting new opportunities, or they're enjoying life more. Highlighting those kinds of qualitative benefits that might not be captured in business metrics, I think, generates a lot of interest. Human nature is to want to be part of something, ideally something interesting and fun that can provide some credit, and maybe economic benefit and upward mobility.

Jon Herstein (00:33:37):
Or even just make the job itself more enjoyable or easier, or you more productive in it. Karthik Devarajan, who's at the American Humane Society, said this about his change management strategy: "We don't show them the features of AI. We take a very real example from within the organization and we have the person from the program be on the call and be our advocate for it."

Ravi Malick (00:33:59):
No better approach. And this is what we're doing, and what we've found has increased the engagement, the adoption, and the amount of demand at Box: we do this at our Friday lunches, we do this in our senior leadership meetings, it's pervasive, and we have the person who developed the use case present it. Just recently, our leader of internal audit developed some agents to help with conducting audits, taking in the information and drafting the report, and we had that person present in a leadership meeting and in an all-hands. That is very powerful. When people see that other folks are doing this, that they have ideas and, even more importantly, they're being given the authority, empowered with the ability to go put those ideas into place, that is incredibly powerful.

(00:34:55):
And that last point, a lot of times you will see the grassroots like, "Hey, we're going to source innovation." And then so you get all these ideas and they don't go anywhere. So I think it's incredibly important to make sure that, yeah, you know what? Not only do we want to see this, but here you go, we want to put these in place where we feel like it's going to benefit not just the individual, but also the team, the company.

Jon Herstein (00:35:19):
Yeah. Well, and you gave the example of us showcasing these. This is a Box example, but we showcase the work being done in this area by people in the business every week at our Friday lunch. That's a really important part of our culture at Box: we have a company-wide Friday lunch every single week. And it's really struck me that what's important here is that driving adoption also has to fit the organizational culture. You can't do something that feels completely orthogonal to the way you normally engage. A great example of this came from Graham Link from Movember, who talked about their mustache generator, which you can actually check out on their website. He said, "Movember's always been about fun. That's part of our DNA. So when we bring in new technology, it has to fit that culture." And he had the great quote, "The mustache generator wasn't just a tech project, it was a Movember project."

Ravi Malick (00:36:09):
And just a quick shout-out to Graham and Movember; we're big Movember participants here at Box and have been for a long time. It's a very insightful take: what you're doing with technology, and particularly how you're showcasing it, has to be consistent with the culture, right? Because the way that we do things at Box might not land someplace else. But again, getting back to the idea of communication: how you communicate should fit the company, but the idea that you need to communicate, and engage with people on a person-to-person basis, is still a major theme and a key variable in success.

Jon Herstein (00:36:52):
Yeah. I think there's another element too, Ravi, which is that if it feels like this is being imposed upon people, either from outside or from above, adoption is just going to struggle. People don't want to be told, "You have to do this." But if it feels like something that is natural to our culture or the way we work, then it's more likely to take off. Philip Erby at American Homes said people aren't going to get replaced by AI, but they might get replaced by people who understand AI and leverage it. If people internalize and understand that, as opposed to being told it has to be that way, I think it'll make a big difference.

Ravi Malick (00:37:29):
Yeah. I'll add to that. I think something that differentiates us, and what I love about Box, is that our leadership demonstrates the behaviors and the desire to lean in on this technology from the top down. Aaron has been very engaged on this, and in some ways he probably engages differently than every other CEO, but that's not to say that as a CEO, as a leader, you can't engage in similar ways. You can take some of that, demonstrate the behavior, lead by example, and show, "Hey, look, this is how I use the technology. Here's how I think about it." People follow folks who are demonstrating the concepts, the ideas, and the behavior that they're asking others to adopt.

Jon Herstein (00:38:17):
And there's another thing that I've seen both from Aaron and from Graham at Movember, which is this idea of being deliberate and intentional about what you don't use AI for.

(00:38:27):
So in Graham's case, he said it's a very human organization, and so there are certain things, around messaging for example, where they're not going to use AI to write it for them, because it wouldn't feel authentic to who they are. And Aaron said something very similar: he does his writing himself. If he's writing an email or a blog post or a tweet, the act of writing is what helps him think and get to something useful, as opposed to thinking, "Well, I can just get AI to do that." So being thoughtful about that too.

Ravi Malick (00:38:55):
Yeah. No, that's good. And one of the important things that I certainly remind myself of, and I hear from others as well, is that AI will not replace the human connection. As we said earlier, in some cases it may help amplify or reinforce it. But those are great examples. Sometimes I still like to take notes with a pencil on a piece of paper. That's my process.

Jon Herstein (00:39:22):
Exactly.

Ravi Malick (00:39:23):
And AI can't replace that. It might be able to augment how I learn or augment my process, but that's unique to me. I think we still have to remember the uniqueness of humans, the differences among humans: understand that, celebrate it, recognize those as strengths, and figure out how to use AI to amplify them.

Jon Herstein (00:39:46):
Totally agree. Now, we've talked a lot about adoption. We've talked about culture. We've talked about the fun stuff, Ravi, but now we need to also talk about some stuff that may be a little less fun. Okay? You ready for this? At the end of the day, we are talking about AI as a tool, right? A tool for businesses to be more efficient, get more work done, produce more value, et cetera. So that brings us to measurement. We had some really honest conversations with our guests about what ROI really means. Every single person I spoke to on the podcast is under pressure to demonstrate ROI on these AI investments. But the truth is, so far at least, measurement's been kind of hard. And some of the easier metrics that you can produce are misleading, right? Just adoption of AI.

(00:40:30):
It could look great right away, but the real question is, where's the value coming from and how do we measure it? So I wanted your thoughts on that. I do have a couple quotes to share with you, but what's your quick take on measurement?

Ravi Malick (00:40:42):
It has to be reflected in the balance sheet. I mean, that's kind of full stop. It has to tie to revenue growth or margin expansion. I sometimes have a bad habit of oversimplifying things, but I think that's what it comes down to.

(00:40:58):
At the end of the day, that's how companies are measured. That's what success is, whether you're public or private: it is revenue and it is profitability. My viewpoint on that hasn't changed as a result of AI. That's how I've always thought about technology: it is an investment, and we should view it just as you would view putting money into any capital market. If I'm going to invest money, and in some cases significant money, both hard dollars and soft dollars, I want to see that return. I want to see how that is reflected in company performance. We have to do our level best to make that connection, to tie it to, "Hey, we're able to excel; we can get the cash faster." So I mean, it all has to come back to the dollars, right?
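Ravi's "it all has to come back to the dollars" framing reduces to simple arithmetic. A minimal sketch in Python, with purely hypothetical figures; the function and the numbers are illustrative, not anything Box actually uses:

```python
# Hypothetical illustration of tying an AI investment back to dollars,
# per Ravi's framing: returns must show up as revenue growth or margin expansion.
def ai_roi(revenue_lift, margin_savings, hard_cost, soft_cost):
    """Return ROI as a fraction of total invested dollars."""
    total_cost = hard_cost + soft_cost      # hard dollars + soft dollars
    total_return = revenue_lift + margin_savings
    return (total_return - total_cost) / total_cost

# Example: $400k revenue lift and $200k margin savings
# against $300k in licenses and $150k of internal time.
roi = ai_roi(400_000, 200_000, 300_000, 150_000)
print(f"{roi:.0%}")  # 33% return on the invested dollars
```

The point of the sketch is only that both sides of the ledger, including soft dollars like internal time, have to appear before the return is real.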

(00:41:48):
So because we can do that, our financial health is better. We're able to close the books faster, and that enables us to do more analysis in treasury. We're able to close deals faster because we shorten sales cycles and reduce the amount of administrative time in a sales cycle. We're able to service our customers better, and we can retain more, because we can actually meet the customer's specific needs and be bespoke to those needs on a real-time basis. Those are things that show up in the balance sheet and in a company's performance.

Jon Herstein (00:42:23):
Yeah. Let me share a great quote with you on exactly that point from Matt over at IBM. He said, "If you're just going to save someone hours, I think our friends at Gartner call this productivity loss. You're going to go to the water cooler. That is different from, am I impacting my unit cost or my flow velocity for my process that I can translate into an actual dollar?" Exactly to your point, Ravi.

Ravi Malick (00:42:44):
Yeah. And I do worry a little bit that we might see a pullback, because there's a lot of OpEx flowing through the system right now, and we've come down off the apex of the hype. It's probably the one time I'll actually say 100%: every single CIO that I talk to is being asked this question, myself included: how do we tie it to the balance sheet? How do you tie it to a dollar? Whether it's that customer engagement, or scalability, right? Scalability, so you can grow independent of headcount and decouple the two lines. Being able to talk about it that way and make those connections is going to pay dividends. And AI has the ability to do that, I think, even more so than lots of technology in the past.

(00:43:36):
And so therefore, I think you really have to hone in and focus on talking about it that way and articulating the value of it.

Jon Herstein (00:43:44):
Well, Stanley Toe over at Broadcom, a long-time customer of ours, outlined a very, very simple framework for the questions you should be asking before you greenlight any of these initiatives, AI or otherwise. First, what are you trying to solve? Second, what's the cost? And lastly, is it a passing trend or is it sustainable? If you can answer those three questions, you're well on your way.

Ravi Malick (00:44:05):
It is interesting, because if you take those three points that Stanley laid out, you can look at folks who were early to the game, who invested in AI. Not to say that they didn't get the return or the benefit of it, but they felt like, "Hey, I need to go be ... This is something material, I'm going to put some dollars into it." What they're figuring out now is that they have to shift a little bit, because they built on their own. They either privatized the model or built their own model, built their own content layer, their own semantic layer, their own engagement layer. And what they're realizing is that, because the market is moving so fast, particularly in the models, the upkeep and the cost to stay relevant, to make sure the capability and the advantage they unlocked is still relevant, is becoming significant.

(00:45:00):
And so I think this gets back to what lane you should be in on the highway, where you actually need to be. And honestly, that could vary depending on where you are in business. But people are starting to realize, like, I don't need to be in the fast lane. And if I wait a little bit, the destination I'm getting to might look a little bit different, it might feel a little bit different. It might be better defined.

Jon Herstein (00:45:22):
I do think a lot of folks that we spoke with struggle with this question of when is the right time, because everything's moving so fast. If you're like, "Oh, we're going to go take what exists today and build on that," well, three months later there might be a model that actually solves that problem for you natively. So that's a very interesting challenge. And it's not just about measuring ROI at launch, or what you think is going to happen, but actually coming back six, 12, 18 months later to revalidate the business case. Todd Ferris from Stanford Health said this really well: "You've got to measure it later."

Ravi Malick (00:45:53):
Yeah, 100% agree, and on a more frequent basis than maybe in other areas or in the past. It is changing. As an example: when we were looking at a platform to engage with our customers and provide service both externally and internally, in just a six-month period, which is a pretty aggressive, pretty fast process, we had two, arguably three, new players enter a market that had maybe three when we started, and we had to reevaluate, and then reevaluate again. And that's an area where we've implemented, we're up and running, but we will probably look at it next year and ask: is it still providing the benefit? Is there something we need to do differently? Do we need to swap it out?

Jon Herstein (00:46:51):
Well, it also means you've got to get comfortable taking bets on companies that don't have a track record, and being really careful. You can't do that with every initiative, but you've probably got to make a few.

Ravi Malick (00:47:01):
No, it is. I mean, CIOs to some degree need to have a VC mindset. That's something you've got to get comfortable with at every kind of company, and probably even more so at large companies: you're going to make some calculated bets, and not every bet is going to pay off, but you hope that some of the others, the big ones, really do. That's the environment we're in. You have to experiment, you have to work with this stuff to figure out what it's going to do.

Jon Herstein (00:47:29):
Well, and maybe that's a great segue for us to talk about what's next, because there are going to be these experiments, but you're also going to start to see early indications of what's working. And so the question shifts from should we be doing AI, how do we do AI, et cetera, to how do we actually scale what's working? So I want to talk about 2026 and what we think is coming next. You could argue that if 2024 and 2025 were years of experimentation and building the foundations, this year is shaping up to really be the year of scale. Organizations now have to do the hard work of taking what they've done, consolidating content, establishing governance, building adoption strategies, and actually bringing scale to that to solve real business problems. So first, if you agree with that assessment, then I do want to share another quote with you.

Ravi Malick (00:48:19):
100%.

Jon Herstein (00:48:20):
Okay.

Ravi Malick (00:48:20):
I totally agree.

Jon Herstein (00:48:21):
Okay, great. So Stephen from LPL Financial said, and we'll get into agents a little bit here, that agentic AI that can actually take action for companies will really speed up the realization of the ROI that people have been looking for. Companies need to redesign their processes from the ground up with the idea that AI is now available. Agree?

Ravi Malick (00:48:40):
100%.

Jon Herstein (00:48:41):
Okay.

Ravi Malick (00:48:41):
100%. And this goes back to, and I'll tie this back to the change management side, the people side and that being the biggest hurdle. Getting people to kind of empty their mind, to empty their cup and think about filling it a different way has been incredibly hard. I think people naturally, they'll look at a process and say, "How can I improve it or how can I automate this piece?" And my response has always been like, "Forget about that. What do you want to achieve? What is your outcome?" Talk to me about your outcome. Describe that. Describe your ideal, like your Garden of Eden, right? What does that look like? Don't talk to me about how it is today. Talk to me about what it should be.

Jon Herstein (00:49:24):
Right. So it's a great point. You can't just bolt AI onto your existing processes and expect everything to magically get better. You've got to rethink the whole thing maybe.

Ravi Malick (00:49:33):
Yeah, because processes that were based on linear progression and logic, that could be completely different.

Jon Herstein (00:49:41):
Right. But at the same time, the accuracy is still improving. And so if you said, "I'm going to go redesign a process around agentic AI, but it's only 80% accurate," you're going to wind up spending a lot of time fixing things manually. Kamal from the University of Chicago said this exact thing: "If it's 80%, that means there's 20% of transactions that you're fixing manually." And he said he's eager to see how vendors are figuring that out, specifically on the reliability part. We're seeing that with our metadata extraction product. Customers need to get to 93, 94, 95-plus percent accuracy before they feel comfortable saying, "I'm going to fully automate this and just let it do the job for me." Are you seeing the same thing?
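Kamal's 80/20 point is straight arithmetic: whatever accuracy the extraction hits, the remainder lands in a manual-review queue. A quick hypothetical sketch; the transaction volume and function are illustrative only:

```python
# Hypothetical sketch of the manual-review load implied by a given extraction
# accuracy, per Kamal's point that 80% accuracy means fixing 20% by hand.
def manual_fixes(transactions: int, accuracy: float) -> int:
    """Transactions left for manual correction at a given accuracy rate."""
    return round(transactions * (1 - accuracy))

# On 10,000 transactions, raising accuracy from 80% to 95%
# shrinks the manual queue from 2,000 items to 500.
for accuracy in (0.80, 0.93, 0.95):
    print(f"{accuracy:.0%} accurate -> {manual_fixes(10_000, accuracy):,} manual fixes")
```

This is why the 93-95% threshold Jon mentions matters: the manual queue, not the headline accuracy, is what determines whether full automation is cheaper than the old process.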

Ravi Malick (00:50:24):
I am. And what's interesting is that, again, it's not really different than any other big technology that you put in. There is always the optimization period. There's always the period where you're trying to drive adoption and as part of that adoption, you're getting feedback, you're fixing bugs, you're optimizing it. I think the idea or the assumption that you don't have to do that with AI or that AI is somehow going to be different in that you're going to put it in and it's automatically going to be at 93%, 96%. I think that's probably a misconception because the differentiating percentage is going to be relative to that company.

Jon Herstein (00:51:03):
And also the use case, right?

Ravi Malick (00:51:04):
And the use case, right? So I think it's probably reasonable to say, out of the box, can I get accuracy in the mid-80s? Yeah, 85% accuracy out of the box is a reasonable target to set. But you should expect to do some level of work to get to the low-to-mid 90s, and that's what you should design for. As you think about implementations in those areas, that should just be part of it: we know we're going to have to tune it, fix stuff, and work through it to get to that level of efficacy.

Jon Herstein (00:51:46):
And you make a great point. This is not new or unique to AI; you've had to do this in IT forever, right? Validate quality and reliability before you roll things out. But what is new is the pace of change, and this has surprised people like us who've been doing this for a long time. John Allen from Baylor said, "We haven't seen this velocity of change in a technology probably ever." And he says, "In 30 to 36 months, we've gone from nobody knew what an LLM was to it has fundamentally changed the way a lot of people think about getting things done." Not necessarily that it's changed the way they do things yet, but the way they think about it on a day-to-day basis. And that is unique to AI.

Ravi Malick (00:52:25):
Totally agree with that.

(00:52:27):
The level of demand and desire uniformly across the company is higher than I've ever seen, honestly, in my career.

Jon Herstein (00:52:38):
Yep. There is another cool and interesting thing about AI that's unique that John also called out, which he said, AI teaches you how to use it. And I'm not sure we've ever experienced a technology that had that capability. What do you think the implications of that are?

Ravi Malick (00:52:51):
Yeah, honestly, I don't know if I have an answer for that. I would say we have some examples where that has ... Well, we've seen that, right? The iPhone. You put an iPhone or an iPad in the hands of a six-year-old and they figure it out pretty quickly. Honestly, I think it's pretty cool that it can teach you how to use it, guide you in how to use it. That's something we can use to our advantage, but I also think you can't over-index on it either. Just because something is easy to use, or seems easy to use, doesn't necessarily mean people are going to adopt it. And the way people adopt things in their personal lives is going to be very different from how they adopt them into their work lives.

Jon Herstein (00:53:37):
Yeah, 100%. So it kind of goes back to this idea of experimentation, everything doesn't have to be enterprise grade, and you've got to really involve people in this because this is all changing so fast. I want to pivot now, and probably our last segment here is talking about predictions. So I'm going to put you on the spot, maybe myself on the spot. We'll talk about specific predictions for 2026. And I think we can come back in a year and see how we did. People can come back and hold us accountable to this, but let me start with the first one, and we've kind of collaborated on this. I'll put them out there, maybe some quick comments on each of these and why you believe them. But first one, by the end of the year, at least 50% of enterprise AI deployments will include autonomous agents handling multi-step workflows, not just answering questions, but actually taking action, 50% by the end of this year.

(00:54:28):
What do you think?

Ravi Malick (00:54:29):
I think it might be high. I think it might be high. And I think it's the "autonomous" in autonomous agents; maybe you could argue about what autonomy really means. Now, if you said 50% of AI implementations will include an agent, or maybe agent-to-agent interaction? Probably true. Yeah, I would say definitely.

Jon Herstein (00:54:55):
All right. Prediction two. We're going to go rapid fire here. Ready for this one?

Ravi Malick (00:54:59):
Okay.

Jon Herstein (00:55:00):
Model commoditization, easy for me to say. And here's the prediction: the "which model" question will matter less as orchestration layers abstract away model selection. Organizations will care more about outcomes than underlying models.

Ravi Malick (00:55:14):
I'm going to say yes, because for the majority of folks and how they're going to be using AI next year, it won't necessarily matter. There might be just a handful of really distinct use cases. If that trend continues, the models stay specialized and they sort of agree, like, "Hey, I'm going to be good at this and you're going to be good at that." I think that changes maybe two or three years from now.

Jon Herstein (00:55:34):
All right. Prediction three. And this is one where I could argue you may be a bit ahead of the curve, but think about the market as a whole. This is around governance maturation: AI governance will shift from permission-based to guardrail-based, enabled by default within boundaries rather than requiring approval for each use case.

Ravi Malick (00:55:52):
This is probably an "I really hope so," an emphatic "I want this prediction to come true," simply because I think it's a matter of survivability; companies have to do that. And it goes back a little bit to trust, and understanding: through that knowledge, being able to trust how these things work and what controls or guardrails we can put in place. I think that should come true.

Jon Herstein (00:56:22):
All right. Prediction number four: content intelligence. Automated metadata extraction will become standard, not experimental. Organizations will expect AI to structure their unstructured content automatically.

Ravi Malick (00:56:35):
Yep.

Jon Herstein (00:56:36):
Yes. Emphatic, yes.

Ravi Malick (00:56:38):
I think companies will expect that.

Jon Herstein (00:56:40):
All right. And then our last prediction: workforce evolution. And I think you actually alluded to this already. New roles we haven't named yet will emerge, just as prompt engineer did in 2023. The AI-augmented workforce will look different than we expect.

Ravi Malick (00:56:55):
Yes.

Jon Herstein (00:56:57):
Seems like a no-brainer. What's your hesitation?

Ravi Malick (00:57:00):
Maybe I was thinking about what percentage. I think that's probably what was going through my head: what percentage would that actually be? How much will it start to shift? I think the real tectonic shift starts to happen next year.

Jon Herstein (00:57:12):
All right. So those are our predictions, and now we've got a couple of questions from our audience, both on LinkedIn and the community site for Box at community.box.com. We had folks submit questions and vote on them. Here are the top two questions from LinkedIn, and I'm going to ask you, Ravi, for your take on these quickly. If AI handles the what, how do we grow our people's how and why skills: emotional intelligence, adaptability, and critical thinking?

Ravi Malick (00:57:40):
It goes back to the relationship. You have to relate with people. You have to show them why their cheese is being moved, as I like to say. You have to be persistent and consistent about the communication and what you're doing. We mentioned it earlier: people respond to people. If you come at it top down, like, "I am saying, therefore you shall do," that's going to be problematic. So I think you really have to double down on engagement with folks in the field, in different roles. This is something I do on a regular basis: I'll go ask folks in different parts of the business, "Hey, has your opinion of AI changed? The way that you're using AI, is that different? Has that shifted? What do you think about how we're approaching this?" Just regularly, in one-on-ones and in casual conversations, just ask the questions.

(00:58:31):
And I think that, certainly, one, I get a lot of intel from that, obviously, but two, I'm engaging, like, "Hey, this matters and I'm interested and I want to know." And I think it comes back to that: it's not the technology, it's the people.

Jon Herstein (00:58:46):
Yeah. And AI is one of those things where I think it is really unique in that people may have very different personal opinions about AI than professional opinions, in terms of what does it mean for me in my work and what does it mean for me in my life? And people may be conflicted. So checking in with people is a great thing to do.

Ravi Malick (00:59:04):
Yep.

Jon Herstein (00:59:04):
Okay. From the community, interestingly enough, our top rated, top upvoted question was actually one we covered already in predictions, which is: will this be the year when the "which model" question matters less as orchestration layers abstract away model selection? Will organizations care more about outcomes than underlying models? I think you covered that pretty well. Anything to add on that?

Ravi Malick (00:59:24):
It's a matter of how long the secret sauce stays secret in those areas, right? So how long are Anthropic and Claude able to keep the ideas and the IP that make them a better code platform a secret? How long is Google able to keep the fact that they do really well in image creation and image generation, or ChatGPT in terms of just deep knowledge and understanding? How long are they able to keep that unique as a differentiator? And I'm not well versed enough in the real deep nuts and bolts of the neural networks and things of that nature to understand whether that's something that eventually just gets commoditized in the next year or two, or whether it's so unique to their approach and their intellectual focus that it's going to stay around for five, six, seven years.

Jon Herstein (01:00:22):
Right. That is a tough one to predict for sure. All right. Well, thank you, Ravi. Thank you so much, and honestly, not just for this conversation, but for all of your leadership at Box and with our customers. This past year has been an incredibly important year for AI, and I think leadership around this and having these conversations is just incredibly important. So thank you. But also, as we wrap up this special episode, I want to say a thank you to all of the leaders we've had on the podcast who made this last year such a rich year for learning for all of us. And I hope that our audience got great insights from these. So to all of our guests this past year: Kamal from University of Chicago, Stephen from LPL Financial, Matt from IBM, Stanley from Broadcom, Philip from AMH, John Allen from Baylor University, Mike and Todd from Stanford Health, Amel from Witham, Smith & Brown, Graham from November, Karthik from American Humane Society, and Robert from Vornado.

(01:01:17):
Thank you all for your honesty, your candor, and frankly, your willingness to share with everybody else what's really happening inside your organizations as you navigate AI. It's been really, really helpful. So thank you again. I think we've captured a lot of the key patterns, frameworks, and questions from all of these conversations into a great resource, a great asset we're calling the 2026 AI Playbook for CIOs. If you're thinking about how to consolidate content, design governance that enables rather than holds back, drive real adoption, and measure value in a way that your CFO will buy into, this playbook is for you, and you can download it at blog.box.com. Ravi and I also covered our five specific predictions for FY26. We're on the record. Come on back in December or January of next year and hold us to it. We'll score ourselves.

(01:02:07):
Finally, I'd say if this conversation was useful, I've got two asks for you. Obviously, subscribe to the AI First Podcast so you don't miss all the great conversations we've got teed up for 2026. And share this episode with other technology or business leaders that you work with, anyone who's trying to separate the signal from the noise on AI. This is AI First, where we keep challenging assumptions, stay curious, and lead boldly into the AI-first era. Thanks for listening, and we'll see all of you again in our next podcast episode. Thanks for tuning into the AI First Podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI-first era.