AI-First Podcast

Baylor University is pioneering the AI revolution in higher education, from accelerating report generation to reshaping the future of security.

In this episode, Jon Allen, CIO & CISO of Baylor University, joins Box Chief Customer Officer Jon Herstein to discuss how Baylor balances cybersecurity with AI adoption, from improving workflows to enhancing collaboration. He also explores the challenges of data privacy, security, and preparing students for an AI-first future.

You'll learn about the future of AI in content management, how it’s being used to improve operational efficiency, and how to build a culture that champions both innovation and responsible AI use.

Key moments:
(00:00) Jon Allen’s dual role at Baylor University
(03:10) Defining what it means to be AI-first at Baylor
(07:00) Real-world examples of AI transforming workflows
(12:30) Leveraging AI to enhance collaboration and speed up operations
(18:00) Balancing cybersecurity with AI adoption in higher education
(24:00) The importance of education and transparency in AI adoption
(30:00) Managing data privacy and security challenges with AI
(36:00) Preparing students for an AI-first workforce
(42:00) The future of AI in higher education and content management
(48:00) How Baylor is evolving to support AI-driven innovation
(54:00) Closing thoughts on AI adoption, culture, and strategy in education

What is AI-First Podcast?

AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.

This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.

If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.

Jon Allen (00:00:00):
How can we make sure that people recognize that AI is actually an enabler of greater human interaction? It's not a detractor from human interaction, and I think that's how it was branded to start. How it was branded to start was, here's the rise of the robots, here's the rise of the technology. And what I see is, here's the rise of the ability for us to turn away from the screen and engage with people again. Right?

Jon Herstein (00:00:28):
This is the AI-First podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about reimagining work with the power of content and intelligence, and putting AI at the core of enterprise transformation. Hello everyone, and welcome back to the AI-First podcast. I'm your host, Jon Herstein, the Chief Customer Officer at Box, and I'm very, very pleased to be joined today by Jon Allen, Associate Vice President, CIO, and CISO at Baylor University. Hi Jon.

Jon Allen (00:01:02):
Hi Jon. Great to meet you in person.

Jon Herstein (00:01:04):
It is great to see you, and a great first name, so we've got a great bond to start with, along with our love of AI. So I'm going to actually jump right into the AI conversation, but maybe just before I do, if you could give a little bit of perspective on the university and your role there. You do have a unique role in holding both the CIO and CISO titles at Baylor, and so maybe you could talk just a bit about that.

Jon Allen (00:01:31):
So for those who don't know, Baylor University is located in Waco, Texas. It's a private Christian institution with both Power Five athletics and R1-level research, with still a core focus on teaching in the classroom. And so it's a really unique place when you look at the landscape of higher education. I've been blessed to be at Baylor University now, full-time, almost 26 years. A majority of my career has been in cybersecurity, everything from being an analyst to an ISO to a CISO; about seven years ago I added the CIO hat to that as well. So I serve as both the CIO and the CISO. I often joke that I sit in my office and argue with myself, because that's what CIOs and CISOs are obviously thought of as doing often. But for me it's really been an enabler. It's allowed our organization to move faster, to understand risk across the technology enterprise, and to make sure that in execution we're really doing cybersecurity throughout the stack of our organization and not just in a corner of the cybersecurity team.

Jon Herstein (00:02:37):
Who typically wins the arguments? Is it CISO Jon or CIO Jon who comes out on top?

Jon Allen (00:02:42):
It's interesting. I'm not sure either one of 'em is very happy most days, to be honest with you, and that's reflective of good risk management. You need to make sure that you're enabling the business, but you're doing it in a way that protects your constituents' information and your intellectual property at your organization. And so I think it is a very tedious balance that really plays out on a case-by-case basis. And so that's one of the things that I enjoy, but at the same point, it's something that you have to take very, very seriously.

Jon Herstein (00:03:14):
We're certainly going to deep-dive into the security implications of AI as we get into the conversation. But let me start with the basics, which is: what does it mean to you to be AI-first, both in your own roles and also there at the university?

Jon Allen (00:03:29):
So first and foremost, I think it's not being afraid of the AI technologies, understanding that it is a technology, right? This is not something that is so new and different that it's just completely foreign that a system could take the actions that we're seeing with some of these AI systems today. And I think that's important. I've often joked, it's just a new type of technology. The technologies we've used historically were black and white, and now we have a third technology: it's gray. That's what I like to call AI technology, gray, because at the end of the day it's less deterministic in nature, and I think that's been the piece that's made people shy away from it. It feels more human at that point, because humans are more gray in how they approach and answer things. And so I think that's one of the conversations that we've had: you have to understand this is a new type of technology. It's a technology that's not going to go away, and at the end of the day, it's going to be a technology you're going to experience, and not even realize that you're experiencing it, more and more. I think that's the piece that is moving so quickly in the landscape today.

Jon Herstein (00:04:43):
You mean in the sense that it's going to just be embedded more and more into other solutions and technologies?

Jon Allen (00:04:48):
Absolutely. I mean, look at search engines today. Go back a year, and you didn't see AI in search engines. If you pull open a search engine today, you will absolutely be presented first and foremost with AI results. And I think that's a really key piece: to understand we're not going to be given a lot of choices as consumers as to where AI shows up in our lives, but we need to be educated and understand what it is and how it works, to be strong protectors of our own information and interactions with these new technologies that we're being faced with.

Jon Herstein (00:05:23):
That's a great example. Another one that I've come across, and I'm sure you have too, is in product reviews, where it used to be you just literally had the reviews and you had to read 'em, right?

Jon Allen (00:05:31):
Exactly.

Jon Herstein (00:05:32):
Now there's an AI summary of what people are saying about this product.

Jon Allen (00:05:35):
Right, right. Oh, I see that as well. And it's interesting, I wonder if the general populace even realizes

Jon Herstein (00:05:42):
That

Jon Allen (00:05:42):
That that is AI, that

Jon Herstein (00:05:43):
It's AI.

Jon Allen (00:05:43):
How much do people even realize, hey, I've already experienced AI, I've already seen AI? I think it's interesting. You see it some in promotion, people promoting, look, we now have AI reviews or AI summaries. But if it wasn't for that, would they even think about it? Would they just think, oh, this is a cool new feature?

Jon Herstein (00:06:01):
Right, yeah. Well, we'll get into a lot of those examples, but I am curious about the current state of things today. In what practical ways is Baylor already using AI? And you don't have to list all of them, but what are some of the things that stand out?

Jon Allen (00:06:15):
Yeah, I think for us it's interesting, because it's deep in the organization in a lot of cases. So much of our organization has been leveraging platforms that have enabled AI already; Box is clearly one of them. And so people are able to take significant documents from their department, maybe end-of-year reports, and pull together a draft of a divisional report in minutes that would've taken weeks in the past. I was just talking to a leader the other day, and they were just amazed with how well it did that, how strong the draft was, and then how it allowed them to really put their energy towards additional value and not just having to go through and ask, oh, is this the fact, is that the fact, right? It really does a strong job of giving you that initial starting point. That's been really, really significant for a lot of our team. Summarization of data, obviously, is bread and butter when it comes to a lot of these AI tools.

(00:07:19):
I think the other thing, and I'll say I've used this personally, is things like drafting job descriptions. As a leader, you talk about a task where you stare at a blank page of paper or a template and you just say, oh man, where do I start? And frankly, I think these tools do a better job than if I had a committee around a table spending a day working on it, and they get you 80, 90% of the way in seconds. And so there have been really simple things like that. Even things like being able to compare RFP outputs. So you go to market on an RFP, you get these massive documents, and you're like, okay, I just need to know where to start. Some of the summaries that you can get out of those are really quality, really allowing you to say, okay, those are areas I need to focus in, because those are where the differentials exist in these solution providers' responses. Massive time savers. And I think that's the piece for me: we're getting better quality, because we're able to focus on the value-add that the employee can bring versus some of the tasks that traditionally were just tasks. Right.

Jon Herstein (00:08:35):
And one of the things that comes up quite a bit in those scenarios is the question of who's ultimately accountable for whatever the output is. So if it's a summary of the department's work over the last quarter, sure, you can have AI draft something for you, but do you feel that people still understand that they own the output?

Jon Allen (00:08:54):
It's a narrative that we push a lot. It's the person sitting beside you; it's the team member that's just virtual in nature. It isn't the replacement, it isn't the thing that is making the decision, it isn't the thing that is writing something that nobody will review. I think that's really key. And as we talk through these things, it's important to keep the human element involved. And we as an institution, being an institution of higher education that focuses on human flourishing, that focuses on human interaction in those ways, we want to make sure that we're speaking into that, right? And not just in our own institution, but more globally. How can we make sure that people recognize that AI is actually an enabler of greater human interaction? It's not a detractor from human interaction. And I think that's how it was branded to start. How it was branded to start was, here's the rise of the robots, here's the rise of the technology. And what I see is, here's the rise of the ability for us to turn away from the screen and engage with people again, really have conversations, because we're not buried in so much of those pieces. Let innovation happen. The greatest innovation happens with people around a table, a coffee pot, a cooler, not necessarily sitting there typing away on their screen all day trying to get their tasks done.

Jon Herstein (00:10:30):
I love that framing and what better place to accomplish that than on a university campus.

Jon Allen (00:10:35):
Absolutely right.

Jon Herstein (00:10:37):
So I am curious. With a lot of the systems that we've had historically, we have a lot of data. You as the CIO and the CISO have a lot of data about usage patterns, what people do with those systems, how they're sharing data, et cetera. With AI, it's still a little bit new in terms of the ability to monitor, look at patterns, and so forth. But what are you doing? What are we able to see today, and how does that help inform what your strategies will be going forward?

Jon Allen (00:11:03):
For us, I mean, it's really looking at things from a macro level. I always highlight that our systems are not about getting into content detail or usage detail. We can't review what prompts people are typing or things like that. But what's helpful for us is to know how many interactions are happening and what's changed when there are more interactions. Again, I'll give you a great example. Simple things like making a button accessible within an application, versus having to go to the web to access it: that is massive. All of a sudden that level of accessibility greatly increases the number of interactions. Very similarly, with you guys, traditionally it was, hey, go to the web to be able to access the AI function. As that now starts to show up in the mobile function, I expect to see many more interactions, because it's accessible. It's the way that people think: oh, I've got to run off to AI island, do my work, and bring it back, or I have to go through these extra three steps to get to it. The closer it is, and the less it feels like an additional action, the more we're going to see users interact with it, use it, and begin to see the enablement that comes from it.

(00:12:22):
And that's, for me, the most important piece: making sure people get some wins, right? Wow, I didn't know it could do that. I didn't realize how easy this was. And as soon as you start to have those things, I think it starts to snowball. We always talk about digital natives, and obviously, being a university, we get students coming in at 17, 18, 19 years old who are absolutely digital natives. What's crazy is there are already AI natives starting to show up. I look at my kids; my youngest two kids are 11 and about to be 14. They think AI-natively already. When they're faced with a problem, the first thing they want to do is interact with AI to get context. And I think that's something we have to get prepared for, because even myself, I'm having to think, how do I rewire my brain to think AI-first? I didn't have to do that as much with technology; I grew up with it. But I didn't grow up so much with this AI stuff. And so it's making me have to rethink how I approach work, how I approach my day-to-day life, and making sure that I'm leveraging those tools as much as possible as an enablement, as a force multiplier, to increase the quality of what I'm able to accomplish.

Jon Herstein (00:13:40):
Well, absolutely. And I think it's going to affect, and I think it is already starting to affect, companies and organizations where you've got people coming out of college. Let's say it's their first job, they're moving into the corporate world, and to your point, they are effectively AI natives. They've been using ChatGPT and other tools for the last couple years of their college career, and they're going to come into the workforce and expect exactly the same thing. And if we don't provide that, and some guardrails around it, they're going to struggle, right?

Jon Allen (00:14:08):
Yeah. And I think what you said is so key: the guardrails, right? The context that they learned it in is not a context of intellectual property, is not the context that is so critical in the marketplace. And we as institutions need to make sure that we're educating students before they leave that these are important concepts to understand, because you don't want to learn that lesson in your first job. That's not the place to go and find out: oh, I wasn't supposed to submit the plans for this great new thing into a public LLM. Oops.

Jon Herstein (00:14:47):
So let me ask you about that. I wanted to pivot here anyway, and again, you're uniquely positioned in your dual roles to be thinking about this, but how do you foster a culture there, but also in the corporate world that balances the desire to use these tools and be innovative and take advantage of them, but also be aware of the risks and what you can and can't or should or shouldn't do?

Jon Allen (00:15:12):
It's all about education. And I think, for me, I highlight the golden rule. It sounds crazy, but for me it's the golden rule of data: treat your data, or the information that you're interacting with at work every day, the way you would expect your personal data

Jon Herstein (00:15:29):
To be treated.

Jon Allen (00:15:31):
That's what it is at the end of the day. It's having that conversation: if it's public information, if it's information that doesn't hurt your organization, it has a classification reflective of that, and that means it wouldn't matter so much where you put that information. That's the expectation. But if you're dealing with information that's protected or restricted or non-directory, and everybody's categories are different, you need to take pause, and just like you wouldn't put that information on a public storage platform, nor should you put it in a public LLM platform. And so it's funny, because in these situations people want to rewrite everything around: we need an AI usage policy.

(00:16:18):
Well, what's wrong with our data use policies today? It's just recognizing there's now an additional modality in which this risk exists. And I think that's the piece I really focus on, because I don't want to talk about just AI and cyber risk. I want to talk about technology and cyber risk, because the risk is there if they're using a non-institutionally-authorized storage platform, compute platform, or LLM platform. And so the theme and the concept should be the same; it's just the examples you're using. And what you want to build is that framework, so that when they get that idea in their head, they take pause, and it becomes second nature to think, okay, I need to make sure I'm protecting my organizational information. Where should this fall in the matrix that I've been taught?

Jon Herstein (00:17:10):
Right. Well, you make a great point: your reflexive action might be to just go write another policy, your AI policy, when actually, at the core of it, what are the principles you have around things like data privacy, security, et cetera? Those principles apply to everything. So don't treat it as this one-off thing that you've got to put constraints around.

Jon Allen (00:17:32):
Exactly.

Jon Herstein (00:17:34):
So we're chatting a little bit about innovation, and I want to go back to a story that I heard about you, which was about a decade ago you took a risk in moving Baylor's identity management solution to the cloud. And I'll explain why this is relevant in a minute here, but can you talk a little bit about taking that risk? It was pretty early, I think, to be making that kind of a call. What drove that and how did you think about that calculated risk?

Jon Allen (00:17:59):
So it's interesting. I think people expect that, as a cybersecurity professional, I should be as risk-averse as possible, tinfoil hat and all. And I think for me it was really seeing the benefits and understanding the risk institutionally. And when I say risk, business continuity falls into that conversation. People often overlook that. When they talk about cybersecurity, they want to talk about confidentiality, but they often overlook things like business continuity, which is availability, another huge tenet of cybersecurity. And when you talk about things like identity management, being in the cloud really checks that box in a really significant way, because now I'm not tied to my institution. More and more of my resources were starting to move to the cloud anyway. So it was really about finding a trusted partner that was going to execute in a very, very strong way.

(00:18:58):
The way I often explain this is, I have a Harley-Davidson motorcycle. People say, hey, you love having your Harley. I say, yeah, and when we talk about how you have to get it serviced and such, I'll say, yeah, but I always take it to somebody who knows Harleys, right? I don't go down the street to just a general mechanic to get that addressed. And I look at cybersecurity and solution platforms the same way: I'm going to go to an identity group. That's what they do every day: identity. And so they're able to be focused on providing the most secure cybersecurity platform for identity. I'm not going to go to a generalist to provide me a really strong identity suite. And traditionally, we as IT shops were a bunch of generalists. Let's be honest, we didn't have a lot of deep, deep specialty, because I didn't have a bench of 50 people working on identity; I had two. And so that was the reason I made that decision: by shifting that, it then allowed my two identity professionals to work on what is unique about identity at Baylor. And so that's a piece of the puzzle for me. Specialization, I think, is crucial, and understanding how specialization improves your risk posture as an organization.

Jon Herstein (00:20:19):
Right. Well, and I think the other benefit you get from that approach is the innovation in that space will be much faster if you're dealing with someone who only does that, right?

Jon Allen (00:20:29):
Absolutely. And we've seen that across the board with the move to SaaS solutions, the idea that upgrades don't happen every two years; they happen every six weeks, they happen every quarter. It's the joke: you wake up in the morning, you pick up your phone, and you're like, oh, it's different today. And we've just kind of gotten used to that. Technology is constantly evolving and changing, and many times we're not even aware that that's happening. We just are. It goes back to enablement being transparent. And that's part of that world: those changes just happen automatically for you, without you having to interact or even do much for it.

Jon Herstein (00:21:10):
And I think that also has a sort of downstream effect on even just the profile and makeup of your teams. As you pointed out, it used to be that it was just a lot of generalists; you might only have a couple of people specializing in any particular area. And now it's much more about how do you advise, how do you decide on which tools to use, and then how do you orchestrate them working together, I think. So is that kind of how you've evolved the organization as well?

Jon Allen (00:21:39):
I would say that our teams have become more business-aware over the last five to 10 years than they were historically. I would also say there's been greater specialization. When I came into IT in the late nineties, I would do networking, I would do some server work, I would do a bit of everything. That's not the case anymore. People tend to specialize in one or a couple of domains within the technology organization. And then you may have people that do have that broader knowledge, but when you have that, usually they're in an architecture-type role, things like that. And so it's certainly changed from that perspective. But the other piece is, I think one of the things we are tasked with as technology leaders is to map the needs of our organization into our technology suites in ways that we weren't historically. If you look at the suite of tools that we had historically, they were pretty small in the scheme of things 10, 15 years ago.

(00:22:44):
You look at anybody's service catalog today at a major enterprise, and they're massive. And so helping our user population navigate that, and not just buy another widget, right? Because that's what ends up happening: they go and do a search, they identify a widget, this is the widget for my need. And it's like, let's talk about what your need is. Let's map that into our current catalog. And oftentimes we're able to meet those needs without expanding the catalog, which, as a business leader, is significant. You don't want to have sprawl of technology where it's not necessary.

Jon Herstein (00:23:19):
And you have to then think about all those new technologies coming in versus what you had before. And I do think people tend to think, well, I've got this very unique set of requirements that only this one solution can meet. And then you've got to look at it and say, well, actually 90% of what you need is here.

Jon Allen (00:23:36):
And it's interesting, because as you can imagine, in a university we see that, and I will agree, at times there are absolutely unique needs. When you're doing cutting-edge research, when you're doing cutting-edge athletic performance, there may be one startup that does what you need, and it's very, very unique and new to the marketplace. The other reality is a lot of our needs on a day-to-day basis fall into normal technology needs. They can be met by a number of different solutions, but we've standardized on one or two solutions to meet those needs. And so when I look at it, I'd say probably 90% of the things that we see on a day-to-day basis fall into our current service catalog, and a very small percentage every year fall into that, hey, this is special, this is something we don't have, this is something we need to explore.

Jon Herstein (00:24:30):
So Jon, when you do have a situation where one of your stakeholders really does legitimately require another solution, and you've got to go vet that solution before you bring it into the service catalog, how do you think about evaluating those new solutions for risks like data privacy, security, maybe even the financial integrity of the vendor, all of those things? What does that look like?

Jon Allen (00:24:55):
So it evolves constantly; first, let me say that, right? You go back 10 years ago, and the questions you asked were, do you require a password, some of those things. And now it's much more sophisticated. What are your cyber risk insurance plans? What audits have happened? So for us in higher education, we always lean towards the HECVAT, the Higher Education Community Vendor Assessment Toolkit, which has pretty much become a standard across higher education for evaluating these solutions. It's accelerated evaluations, because many vendors have become aware of that tool and already have completed HECVATs to provide. And even as recently as this past fall, AI questions have been added into it. And so it's very much something that's keeping us on the cutting edge of what we need to be thinking of from a risk standpoint, what's coming down the line.

(00:25:55):
And rather than each of us as institutions approaching this individually, we're approaching it as a community. And that allows us to really focus on what is our unique use case of this platform, and what are the maybe few unique questions that we as an institution, maybe because of the state we're located in, maybe because of some other things, need to focus on, some additional things beyond what most institutions address. And so our team spends a lot of time going through these reviews. I would say one of the things that people are always shocked by: you talk about an average solution provider review, and it's somewhere between 20 and 40 staff hours involved in actually completing those reviews. We're continuing to accelerate that, right? I would say it was more than that historically, so it's dropped. I can certainly see in the future how AI is going to continue to help us refine that. Believe it or not,

(00:26:53):
Hey, you give a good AI tool a HECVAT, and it highlights to you, hey, these answers are really not good in this area; you need to dig in deeper there. And so just being able to point our analysts in the right directions, that future's coming very, very quickly. And so it's not only, are we evaluating the use of our data in these platforms properly and getting to those additional risk items like AI; it's also looking at how we can accelerate review of these platforms. Because what nobody likes to do is to say, oh, you want this new platform? Let me get back to you in four to six weeks when we have the security review done. Right?

Jon Herstein (00:27:30):
Right.

Jon Allen (00:27:31):
And sadly, still today, sometimes that is the answer. That answer has changed dramatically; we've been able to get much faster at that. But going and looking into things like AI certainly makes it more complex than it was traditionally, because even the solution providers struggle with some of those answers today. When you're talking to folks, they're like, I'm not sure, I'll need to find that out. And so they're adjusting and having those conversations as well.

Jon Herstein (00:28:03):
I definitely want to dig into how you evaluate for AI capabilities in just a second, but maybe just a couple quick things on the HECVAT. First of all, that use case that you described: I've had that exact conversation with other customers of ours around leveraging Box AI for exactly that use case, which is, go validate something. And it's not so much to make the decision or say, yes, this passes or doesn't pass, but to say, you as the human approver, here are things to really pay attention to, maybe dig into and scrutinize further. And it's got a template to work from, because you know what a good HECVAT looks like.

Jon Allen (00:28:34):
Exactly. Exactly. And that's something that a few years ago we couldn't have imagined something like that. And to have that at people's fingertips so quickly now has been really empowering.

Jon Herstein (00:28:47):
Well, and if you could take that 40 to 60 hours of staff time and not only reduce it, but also compress it in a way that the turnaround back to the requester is faster, it just means we've accelerated the velocity of getting that through the system. Right?

Jon Allen (00:29:01):
Exactly.

Jon Herstein (00:29:03):
So maybe a tip for anyone out there listening who sells to higher ed: get familiar with the HECVAT and make sure you understand what's in there and how to complete one.

Jon Allen (00:29:11):
Absolutely. Absolutely.

Jon Herstein (00:29:13):
Okay, good tip. So let's go back then to AI, because the other challenge that we're seeing is vendors who've already gone through your process of being approved for security and data privacy and all that now add AI to their product, and there's a whole other process, perhaps, that they need to run through. So how are you all approaching that?

Jon Allen (00:29:31):
And that's really what we call continuous assessment of vendor solutions, and it's a challenge. A lot of organizations have different approaches to that. It can be risk-based, which goes back to the data, goes back to what the system's actually doing, and you have a risk profile for it that may say, on an annual or biannual basis, we're going to go back and refresh to make sure that the assessment we did at the time the system was purchased is still right, still current. Those are time-consuming efforts. I mean, there are GRC tools and all the things out there to do that. I think the AI piece has definitely forced that to happen more often than it probably had traditionally. So you get the message: hey, by the way, effective two weeks from now, we're enabling AI in your current platform. And everybody takes pause, and it's like, whoa, wait a minute, what does that mean? And so we have to jump in and look at those magic questions. And they're not shocking questions, they're not surprising questions. It's like, are you training on our data? Is our data being poured into a larger corpus of training material? All these things. Even your queries; it sounds crazy, but prompts themselves are considered confidential information. How are those prompts handled? Are they stored?

(00:30:58):
Are they used in any way for further training? Just the prompt itself.

Jon Herstein (00:31:03):
Who else has visibility to the prompts?

Jon Allen (00:31:05):
Who all has visibility to those prompts? And so there are new questions that we hadn't had in the past, and frankly, a lot of the organizations who are doing what I would call new AI integrations are expecting those questions now. I think we're in a much better space than we were back 12 months ago, when everybody was like, hey, we just plugged this in, and, oh yeah, we didn't think about those pieces. Everybody was racing to market so quickly, right? I think that we're kind of at that second-gen spot where solution providers are coming in and saying, okay, yes, we have those answers, here they are, and you may have some follow-up questions. A lot of it for us now is educating our users, because, hey, if they use a free public version of a tool, that's not protecting our information. You need to make sure to look for this icon or this thing to make sure that you're in the protected version of the platform. And those two platforms may look almost identical. So there are so many challenges in this space that traditionally you weren't faced with as much. We didn't all run an internal internet search engine that everybody was using.

(00:32:21):
And at times, I'm sure people put information in those that we probably would've all preferred not be placed in those engines, but those engines weren't training on it in the way that we're talking about with LLMs and AI. And so I think that's the piece that's really changed the dynamic. I joke that social media is a great example of this: if it's free, you're the product,

Jon Herstein (00:32:48):
And

Jon Allen (00:32:48):
If an LLM is free, I can promise you that the intellectual property you are submitting to it is absolutely the currency you're paying for the usage of that LLM.

Jon Herstein (00:32:59):
Yeah. So read the terms of use very carefully.

Jon Allen (00:33:02):
Absolutely. Absolutely.

Jon Herstein (00:33:04):
Further to that topic, what guidelines have you all produced? We talked about policies a little bit earlier. Policies, principles, guidelines, however you frame them, what have you put in place that requires, maybe, human oversight of the outputs of AI, or what prohibitions have you put around what you can feed into AI? What have you put out there for folks?

Jon Allen (00:33:23):
So we have a guideline published out on our website. It touches on a lot of the things you just mentioned: making sure people are aware of current policies, data usage, things like that, and making sure that people are encouraged to explore and think about these things. The other thing is, people hear the word policy or guideline and they're like, oh, here they are just to tell me no. That shouldn't be what happens in this case. This should be a how-do-I-use-these-things, what-do-we-even-have, right? That's the other piece that's so funny. Oftentimes people are like, well, I go out and use this because we don't have any AI tools. And it's like, well, that couldn't be further from the truth. We've got a really nice suite of tools that are available to our campus population. And so it's more than just a guideline.

(00:34:12):
It's really all of those pieces. It's available training, it's things to help people understand how to use it, where to use it, and how to stay up to date. That's the other challenge. We put out our guideline and we're already looking to review it again, because the space changes so quickly. Things that people were really uncomfortable with from a usage perspective 12 months ago, that needle may have moved, right? And all of a sudden people say, I'm not so worried about that like I was before, so we should consider these particular usages as well. And so I think that's the piece that's unique in this space: we haven't seen this velocity of change in a technology probably ever, right? The rate of change is so fast in this space that nothing rivals it. I mean, mobile, it took 10 years to get to what we consider modern mobile technology with phones and things like that. In 30 months, we have gone from nobody knowing what an LLM was to it fundamentally changing the way a lot of people think about getting things done on a day-to-day basis.

Jon Herstein (00:35:30):
And there's an incredible choice of LLMs, right? It's not just one; you pick the one for each use case. And I actually want to ask you about, not exactly that, but we obviously have a lot of capabilities in Box AI, and I'm curious: what is Baylor doing now, and thinking about doing, with the AI capabilities that we're providing, given all the content that you already have in Box?

Jon Allen (00:35:54):
So it's interesting, because people always talk about models. I think that's what's been interesting about the gen AI conversations, and you won't hear me say gen AI a lot, because I think as we talk about AI, too often we aren't including machine learning, we're not including RPA or agentic-type things as much. And so in that space, what's been interesting is people saying, I really like this OpenAI model, or I really like this Gemini model, and it almost feels like the PC/Apple wars or the Beta/VHS wars: this is my model. And at the end of the day, every one of them is good at certain things. I'm not sure any of them are good at everything. And so researchers, especially on our campus, would say, I need to get access to multiple models, and there's no easy way for me to do that. You guys have enabled that in a huge way, to be able to go into a single platform and just click a dropdown and select from multiple lineages of models and see how the results vary so greatly. Frankly, before, you had to build against APIs and go against different gateways: okay, we're going to try to build a platform to enable people to do this. It's just a dramatic change. And I think that's the piece for me, the pace at which you guys have made these changes. I mean, go back six, seven months ago, there was one model that you could use, and that was the model.

Jon Herstein (00:37:26):
And now

Jon Allen (00:37:26):
You've gone from that to, what, three, almost four lineages available through the platform now. And I think people don't realize, when you get used to using one model, how the models are unique and what they're each good at. And so for me, I think it's going to be really empowering for our campus to explore this, to understand it, to have it where it's not a difficult barrier to get access to multiple models. That's a really, really big piece of the puzzle for me: to make sure that as we do this, it is simple. In no way should you have to be a technologist to use these things and to explore these things. They must be, and I don't use the word accessible, the word I like to use is democratized: everybody across our campus can get access to these technologies. And I think you guys have done a really good job of stepping that up really, really quickly in the marketplace.

Jon Herstein (00:38:34):
Thank you for noticing that. I actually think we almost don't talk about this enough, because of the power of it. I mean, I myself have taken advantage of this capability, where I can take the same prompt, the same set of content, and then test it with different models and see, well, which one is closer to the thing I'm actually trying to get as my result? And then it may turn out that this one's better for this use case and that one's better for a completely different use case, and you're not locked into any one, which I think is really important, right?
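The same-prompt, many-models comparison described here can be sketched in a few lines. This is a hypothetical helper, not any platform's actual API: each "model" is assumed to be a plain callable so the shape of the workflow stands out, and the stub lambdas exist only to make the sketch self-contained.

```python
# Sketch: run one prompt + one piece of content through several models and
# collect the answers side by side for comparison.

def compare_models(prompt, content, models):
    """Send the same prompt and content to each model; return name -> answer."""
    results = {}
    for name, ask in models.items():
        results[name] = ask(f"{prompt}\n\n---\n{content}")
    return results

# Stub "models" so the sketch runs without any external service.
models = {
    "model-a": lambda text: f"[A] {len(text)} chars summarized",
    "model-b": lambda text: f"[B] key points: {text[:20]}...",
}

out = compare_models("Summarize this contract.", "Term: 12 months. Fee: $10k.", models)
for name, answer in out.items():
    print(name, "->", answer)
```

In practice, each callable would wrap a different provider or model lineage behind the same interface, which is what makes a dropdown-style model switch possible.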

Jon Allen (00:39:04):
And you're not moving your content all over the place. That's the other piece I think, and that's putting on obviously my cyber hat, right? The content lives one place within one security domain that's highly secure, and that's something we shouldn't take for granted. Data sprawl is the enemy of cybersecurity, and so having someplace where people can experiment and do all these things without having to move their data all over the place, it's extremely positive from a risk posture as well.

Jon Herstein (00:39:40):
Yeah, I totally agree. And that's a key part of how we thought about this from an architecture perspective. Not to jump around too much, but you did mention agentic AI, and so I am curious how you are thinking about that. What does that mean to you, and are you seeing use cases in practice already taking advantage of agents?

Jon Allen (00:39:59):
So it's early, obviously, in this space. This stuff's all new. I think we are starting to talk about use cases and we're starting to explore use cases, everything from how do we advise our students better,

Jon Herstein (00:40:14):
How

Jon Allen (00:40:14):
Do we make sure that when people want to explore policies, there's a predetermined, easy way for them to interact with a policy repository, things like that, what I would call simple use cases. Because then you get into these more complex use cases where it's agents of agents and you've got all these little pieces. I joke, it's kind of like when we got into microservices

Jon Herstein (00:40:41):
And

Jon Allen (00:40:41):
Everybody was, oh, microservices is going to change the world. I expect we're headed down the same path: agent specialization, all these different pieces. A couple of things are going to be interesting for me to see, like how much cross-operation happens in this. Because for agents to be exceptionally powerful, you don't want them locked within just a single platform. That's where it's going to get really interesting: agent interaction, agent handoff, agent linking. So I've kind of jumped two or three steps down the path there, where it's like, okay, it's not just that I want to have an agent within Box. I want to have an agent within Box that can interact with my agent in my ERP, that then can interact with my... right? And it's like, okay, now this is where it's going to start getting next-level powerful from an execution standpoint. And I think we're going to get there way faster than we anticipate, in some ways.

Jon Herstein (00:41:45):
I think it's moving very quickly. I just saw a demo the other day of one of our agents doing metadata extraction from content, and then a second agent, using a completely different prompt and a completely different model, doing validation of the extraction that was done by the first agent and pointing out where there may be issues. And that's within our environment. Now, like you said, you add an ERP, you add a financial system or student record system, and have 'em all work together. It's exciting.
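The extract-then-validate pattern just described can be sketched with two independent steps. These toy rule-based functions stand in for two separate model calls with different prompts; the field names and regexes are illustrative assumptions, not the demo's actual implementation.

```python
# Sketch: agent one extracts metadata from content; agent two independently
# checks the extraction against the source and flags possible issues.

import re

def extraction_agent(document: str) -> dict:
    """Pull simple metadata fields out of the raw text."""
    date = re.search(r"\d{4}-\d{2}-\d{2}", document)
    amount = re.search(r"\$[\d,]+", document)
    return {
        "date": date.group() if date else None,
        "amount": amount.group() if amount else None,
    }

def validation_agent(document: str, extracted: dict) -> list:
    """Independently verify each extracted value and report issues."""
    issues = []
    for field, value in extracted.items():
        if value is None:
            issues.append(f"missing field: {field}")
        elif value not in document:
            issues.append(f"{field} value {value!r} not found in source")
    return issues

doc = "Invoice dated 2024-05-01 for $1,200 covering annual support."
meta = extraction_agent(doc)
problems = validation_agent(doc, meta)
print(meta, problems)
```

The point of the design is that the validator shares nothing with the extractor except the source document, so a mistake in step one isn't silently trusted in step two.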

Jon Allen (00:42:13):
I think the other piece of agentic AI that I'm just now realizing is that it really abstracts prompt engineering away from people. Whereas before, when I wanted to have a conversation, I had to really engineer this prompt, well, what if I just had a specialized agent that was pre-engineered for what I needed to do? And so, one of the things we joke about: you go way back, hey, you used to have to know how to edit a config.sys or an autoexec.bat file to get your DOS system working correctly. I'm dating myself quite a bit there.

Jon Herstein (00:42:51):
Same. I'm right there with you, Jon.

Jon Allen (00:42:53):
Right? And so, okay, we go way back. How much, in five years, will anybody even understand or know how to engineer prompts as a layperson interacting with these tools? And how much is this just first-generation technology that's going to continue to refine and change in ways where maybe that won't be the skillset that's so critical going forward? That's one of the things I've been thinking about as well. We're already starting to see it with agentic AI, because it's pushing into the background even knowing what the prompt is and how it works.

Jon Herstein (00:43:27):
It seemed like early on with generative AI, there was a period where it was like, ah, prompt engineer, that's going to be the next big job. And then that seemed to go away. To your point, if the end user doesn't need to be the one engineering the prompt, who is doing that? And is that a job of its own, or do you have a point of view on that yet?

Jon Allen (00:43:48):
I think people are going to have to understand this technology. And when I say that, they're going to have to understand what makes it click a little bit. I don't think they're going to be able to be completely abstracted away from it holistically. At the same point, when we talk about building really sophisticated agents, I think there will be a new skillset around prompt engineering, and maybe it'll be called something else entirely, that is different than what we've done traditionally. And what's funny is, you'd think, a programmer would be great at this, but prompt engineering isn't programming either, right? It's really got some unique characteristics. Some of the greatest prompts I've seen have come from people who are not technologists. They're folks who really understand language and how to ask for and instruct things. And it's a completely different domain than you would expect.

Jon Herstein (00:44:48):
There's a teaching element to it as well: here's something I know how to do really well, and I need you to do the same thing for me, maybe in a different way.

Jon Allen (00:44:56):
Exactly.

Jon Herstein (00:44:57):
Yeah.

Jon Allen (00:44:58):
And I think the other thing we're going to see, and I've already experienced this, is the technology will do it itself. And so you sit there and you're like, hey, I'm trying to accomplish this, help me write a prompt. Well, now all of a sudden I am vetting whether something is written to do what I want accurately. I'm not starting with that blank page. And so I think that's the other piece about this technology that feels a little sci-fi, Terminator, rise-of-the-robots: it teaches you how to use it. And I'm not sure we've ever experienced a technology that had that capability, that could really teach you. I mean, imagine if you got out to your car in the morning and it didn't start, and you could say, hey, can you tell me what's wrong with you and what steps I should take? That's what you can do with these AI technologies. Hey, that output isn't what I wanted; help me understand how I would refine it to get this instead, and it'll give it to you. And you just sit there bewildered, being like, wow, it's both the teacher and the student at the same time.

Jon Herstein (00:46:04):
Yeah, I just saw an example of an HVAC technician who was working on a unit he'd never seen before, never worked on before, and was able to take a couple pictures, maybe a video, describe what was happening to the AI, and then have it come back and say, well, here's what you need to do. Right?

Jon Allen (00:46:20):
So

Jon Herstein (00:46:20):
It's not taking that job away, but it's making that person way more effective at doing the job.

Jon Allen (00:46:25):
Exactly. Exactly. I don't think we're very close to AI and robots replacing human hands in a lot of situations at this point, but there's definitely going to be some evolution in this space faster than I think we had originally anticipated.

Jon Herstein (00:46:45):
So I wanted to talk about the user side of this a bit too. You touched on change management earlier, but I am curious how you and your team are thinking about the evolution of users' roles, and how much of the responsibility for education, around how to use these tools, how to take advantage of them, and how to do so safely, sits with your team. Is that someone else at the university? And how is that evolving?

Jon Allen (00:47:12):
Believe it or not, I think it's a yes to all

(00:47:16):
When it comes to pedagogy in the classroom, we have great people that teach people how to teach, to make sure people can learn what's being taught. Our job is to make sure that those folks doing that understand the tools, understand the caveats, and that wherever possible we're there as a resource to enable those conversations. Where I find myself is oftentimes talking to leaders about how this strategically impacts things. Likewise, on the administrative side, we're probably going to be a little deeper involved there, making sure, here are the toolsets, here's how the toolsets impact things. And the other place that we're really pushing into is going to be the research space, right? And this is more from the standpoint that the researchers have the ideas of what they're trying to accomplish, and they're going to come to us and say, how is this possible?

(00:48:19):
Or, how can you enable me to do this? And so we really play that role of enabler across those areas. I don't expect that we, as IT, are the advocates saying AI for all. That's not our role. I think our role is making sure that people have the knowledge and the tools to leverage it appropriately, where appropriate, and that's something that needs to be decided by each domain. We're unique: being in a university, I really do have these three unique columns of teaching and learning, administration, and research, and each one of those may have different answers to the same question about AI. And so that's something that I think we've got to be prepared for and be able to navigate with them hand in hand.

Jon Herstein (00:49:12):
Couldn't agree more. And I do want to talk a little bit about what you see going forward, but before we do that, let me pause and ask you, for fellow CIOs or CISOs, either in the higher ed space or in commercial: are there two or three lessons you feel like you've learned over the last 30 months, as you said, that you would want to pass on?

Jon Allen (00:49:30):
Don't always assume what's going to happen in the marketplace. When I say that, I think I assumed early on that this was just going to be embedded in everything very quickly,

(00:49:44):
And I was sort of a naysayer to some of the standalone approaches. I think what happened is, because of economics and other things, it took us longer to get to the point that we are at already. Now we're starting to see the emergence of what I would call second-generation embedded AI in technologies, as just a standard: I'm not buying something extra, I'm not having to do a lot of things, it's just there. And so the first thing I guess I would say is, don't necessarily assume things are going to get there faster, because the marketplace sometimes will naturally slow things down for reasons other than technology. Just like the best technology doesn't always win. We've all seen that historically.

(00:50:35):
The piece that I always highlight, too, is the run to policy. I'm not one to run to policy. I think if you have really well-written policies, there may be a few places where you need to tweak them, things like that. But we all created, well, I'll say we didn't, but a lot of organizations created mobile technology policies. Mobile tech is just tech, right? There are some caveats with it, but it's a caveat with a laptop versus a mobile phone versus... and, oh, by the way, that's 90% of the technology that everybody uses today. So maybe our base policy should just be good enough to adapt to those modalities. And I think AI does bring a couple of unique questions to the table,

Jon Herstein (00:51:21):
Of course,

Jon Allen (00:51:23):
But there are just a few. The vast majority of what we do from a data protection standpoint, from a platform standpoint, remains the same even with this next-generation technology upon us. And so I encourage that.

Jon Herstein (00:51:41):
Well, can I say one other thing about the policy piece? Yeah, absolutely. I was having a conversation with a CIO the other day, and this exact topic came up, and he said, can you imagine what someone's internet usage policy would look like if they'd written it in the first two years of the internet? If you looked at that document today, it would look ridiculous, right?

Jon Allen (00:52:02):
Absolutely. Exactly.

Jon Herstein (00:52:04):
Think about principles over policies, right? What should your principles be?

Jon Allen (00:52:08):
Exactly. Exactly. And I think the other piece, and this is a lesson from what I talked about earlier with the cloud migration, right? Don't assume you're in control. I think that sounds crazy,

(00:52:23):
But any one of us, as an institution, an enterprise, you aren't going to get to decide where the marketplace is headed, and the marketplace is headed someplace. And so I joked, right? Oh, if your strategy was, we're not going to the cloud, you lost. Unless you're a three-letter agency or something else, you probably lost that bet. Likewise, you need to understand that the marketplace is going to decide how generative AI is used and where it is in solutions. And you need to be ready to adapt your enterprise strategies based off of how the marketplace evolves over the next 12 to 18, 24 months.

Jon Herstein (00:53:09):
That's a great one. Don't assume that you can control it. That's a great point. We think a lot at Box about what I would call the intersection of content and AI. I mean, for us, this is the crux of it. You've got unstructured content, you've got the power of generative AI. Those things coming together is what's really interesting to us. And I'm curious, as you look forward, any predictions or thoughts about how that intersection develops and where the benefits may be for enterprises?

Jon Allen (00:53:39):
We've entered a time where we just create content and data at such a rapid pace. And I think one of the things that most organizations have struggled with in this time period, where storage is cheap,

Jon Herstein (00:53:53):
Data

Jon Allen (00:53:53):
Is plentiful, content is plentiful: how do we get value out of it? How do we manage it? How do we justify that it even exists, in a lot of cases? And so for me, that's the piece that, looking forward, I think is going to be empowering. A few minutes ago, you were talking about metadata. How many of us in organizations are doing an amazing job of tagging, of properly retaining or not retaining, all these different things? Those are the things that should be enabled, right? If that's a tedious task that can be automated, then it's a tedious task that's automated, and we're going to operate in a world where we're focused on the right things and getting the right information in a timely manner. I mean, I think that's the other thing we have to acknowledge: why are we in the generation that we're in right now with AI? AI is not new. The algorithms, the ideas, natural language processing, all these things have existed for decades. What's changed? Compute.

(00:55:02):
Enablement via compute. And so that has put us in this spot today, because it's now possible. Could you imagine hitting the chatbot and, yeah, okay, I'm going to come back after lunch to get the response, that's how long it was going to take? And realistically, probably five, six years ago, that's where we stood. And so we have to acknowledge that we're here because of the acceleration of compute. And I think we're not done seeing that yet. And as a result, for things like content management, the intersections and the value, I'm not sure we can imagine what's possible today. Now, I'll layer on top of that with my other hat: I'm not sure I want to know what will be possible, because from a privacy and cybersecurity standpoint, it probably scares me to death.

Jon Herstein (00:55:46):
And it's, again, going back to finding that balance between taking advantage of the innovation but doing it securely. Yeah, absolutely. That's why your job is probably so interesting, and maybe a little scary

Jon Allen (00:55:56):
Sometimes. Yeah, no doubt. No doubt, Jon.

Jon Herstein (00:56:00):
You made a great point about metadata and value. So much of the demand that we're seeing from our customer base now is, how can I apply these AI capabilities to actually getting value from the content that I've been storing with you for years? And if you look at the failure of enterprise content management, in many, many cases it was exactly this point: metadata is tedious, it's error-prone, it's human-driven. It wasn't done very well; no one wanted to do it, so it wasn't done properly, and then it's pretty useless. And now we've got the opportunity to change that in an automated way,

Jon Allen (00:56:36):
Right? Exactly. And I think we're seeing the tools now getting to the point that this is starting, what I would call the start of it, right? I mean, I think it's going to evolve very quickly over the next couple of years. The idea that you even have to think about what your content is and how it should be classified or who should have access to it: why do we put that on our users? Why aren't we automating so much more of this in the background? Wearing my cyber hat, obviously, user security, user provisioning, at the end of the day, it should be down to the data level. That's strong security. But we've never done that, because it was so tedious. This enables that next generation of cyber technologies, cyber platforms, content platforms, where that's just going to be automatic, and we're starting to see that some with what you guys are doing already. And I expect that we're going to see it across the marketplace as this becomes more approachable and cost-effective.

Jon Herstein (00:57:40):
Totally, totally agree. And we're seeing the same thing. And actually, it's a nice pivot to where I wanted to wrap things up: talking about something that's really near and dear to my heart in my customer success role. I think a lot about these three pillars of value, culture, and experience. How do you ensure that customers, in your case stakeholders and end users, are getting value from the things that you're doing? How do you think about the impact of these new technologies and innovations on culture? And then, what's the actual experience that end users have? So there's a lot wrapped in there, but maybe three questions. First: what do you see as the critical path to value realization with these technologies? When you think about AI and what it can bring, how do you get to value actually being realized by the organization?

Jon Allen (00:58:25):
I think the first piece is making sure that our user base understands them. I mean, I can't necessarily come up with, these are the problems that are best addressed by AI, because the people that understand those problems the best are down in the organization. And so they have to have the understanding of what's possible, to then be able to float up to us, hey, this is something that we think may be practical. We're starting to see that. We're starting to have those people reach out and say, we're thinking it'd be really cool if in this department we had a chatbot, an agent, that was capable of doing this for our students. Wow, okay. Let's start talking through that. Let's start doing the ideation over what's possible there. That's the first piece, because, candidly, we as leaders don't see all those things down underneath, necessarily, right? We're not the boots on the ground. And so we have to enable our folks that are deep down in the organization to know, this is something I should float up, because it's a possibility. And I think this is another situation where there have to be no bad ideas.

(00:59:38):
You have to be willing to listen to everything because the thing that may seem the most outlandish may be the greatest opportunity going forward. And so that one's really critical.

Jon Herstein (00:59:50):
Yeah, I think the question is almost what's the craziest thing you could think of doing with this technology? And maybe it's not that great,

Jon Allen (00:59:56):
Maybe it's not right. Maybe.

Jon Herstein (00:59:58):
Yeah.

Jon Allen (00:59:59):
And so that's a big piece for me.

Jon Herstein (01:00:03):
We very much have embraced that, and I think what's happening is these things are going to happen in parallel. There's going to be a top-down initiative where you say, we want to leverage AI for this, and that would come from you and your team at a leadership level,

Jon Allen (01:00:14):
But

Jon Herstein (01:00:14):
At the same time, you want users bubbling up and saying, I'm actually using AI in my day to day right now to help me do this. And that kind of bubbles up, and those things maybe meet in the middle, but I think you're going to have to have both happening simultaneously.

Jon Allen (01:00:29):
Yeah, I would agree. I think that's absolutely key. And frankly, the top down stuff in some ways comes from our solution providers as well, right? Because they're enabling it and saying, Hey, AI is now doing X. And you're like, oh, okay. Well, that's a piece we didn't expect, but that's going to be really helpful.

Jon Herstein (01:00:45):
Well, certainly your vendors and partners should be bringing that to you and even saying, here's what some of our more leading edge customers are doing with these capabilities, and you should be too. Right?

Jon Allen (01:00:54):
Yeah.

Jon Herstein (01:00:55):
So then we move on to change management and adoption. What do you think are some of the most important criteria for actually getting people to adopt these things, which you mentioned earlier?

Jon Allen (01:01:05):
It really goes to making 'em aware and making it approachable. Don't make it feel like this is a have-to. Don't make it feel like this is the robots coming in and taking over the world. The thing that we've seen that's the most powerful in this space is showing some use cases.

(01:01:26):
Go in, whether it be a departmental meeting or maybe a room of leaders, but nobody wants to just look at slides saying, hey, this is what it can do and this is how it can do it. Pull up a Box Hub and say, look, we threw all the documentation for our high-performance computing environment into this hub. We've now allowed our user base to get access to it. And for the things that used to take them a ton of time, searching through and finding the right thing, they can ask a question about what they're trying to accomplish and get the right answer from trusted sources almost immediately. And you see the light bulb start to click, because people are like, I've got something similar in my department; hey, we do this thing. So it's this quick enablement where all of a sudden they see it in practice, and they start that ideation process, and they start thinking about how they can apply it. And I think that's so critical, and it makes it approachable when they see that. They're like, well, that's not hard. This isn't PhD-level rocket science. This is something that I could sit in my office and knock out in a half hour, and now I know how to use it.
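The "ask the documentation hub" pattern above can be reduced to a toy sketch. A real deployment would use an LLM with retrieval over the hub's trusted documents; this stand-in scores each document by simple word overlap just to show the shape of the workflow, and the hub contents are invented for illustration.

```python
# Sketch: answer a question from a small set of trusted documents by
# picking the document with the most word overlap with the question.

def answer_from_hub(question: str, documents: dict) -> str:
    q_words = set(question.lower().split())
    best_name, best_score = None, 0
    for name, text in documents.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None:
        return "No relevant document found."
    return f"Per '{best_name}': {documents[best_name]}"

hub = {
    "hpc-quickstart": "submit jobs to the cluster with the batch scheduler",
    "storage-policy": "research data is retained for five years",
}
print(answer_from_hub("how do I submit jobs to the cluster", hub))
```

The key property, even in this toy form, is that answers come only from the curated document set, not from anywhere on the open internet.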

Jon Herstein (01:02:37):
Well, even show them: here's the prompt that I'm using, and here are the documents. So maybe two pragmatic tips. We're doing exactly that at Box, where we have a Friday lunch every single week, an all-company, all-hands meeting with business updates and so forth. For the last few months, we have been highlighting, every single week, a user-driven AI use case. These are ground-up: it could be a sales rep, it could be a marketer, it could be a CSM, and they're self-nominated through a submission system. We're picking one of those each week to highlight, and we'll have that person from that part of the business come on and demo what they've done and explain how they've done it. And to your point, it opens up eyes across the business: oh, I've got a similar thing that I do; either can I use that one, or can I build one like it? That's one thing. And the other thing, and I'm sure you all do this, is hackathons. Set aside time to have people go do these things outside of the normal day job.

Jon Allen (01:03:40):
Yeah, no, those are great examples. And as you talk about that, it makes me think of one thing where they're similar: we need to do a good job of making sure we don't end up with sprawl or technical debt. And when something's new, what's the most likely thing to happen? We're going to get into those situations where you have 20 agents doing very similar things that could be generalized. So I think that's another piece of the puzzle we haven't even really talked about yet: how do we, I won't use the word govern because that's an ugly, nasty word, but how do we enable a catalog of agents across the enterprise in such a way that people say, hey, that's exactly what I need, I'm just going to use this instead of creating my own? Those are things that I think are going to be really crucial moving forward, because what we don't want is 50 agents that are all doing document summary against the institutional strategic plan. That's not helpful.

Jon Herstein (01:04:44):
Yeah. I think the sweet spot is going to be: how do you establish just enough governance to prevent that kind of sprawl, but not so much governance that people say it's just not worth even trying? Right?

Jon Allen (01:04:56):
Exactly. Right. Enablement. That's the key.

Jon Herstein (01:05:00):
Yeah. Let me end with this, which is, what is your success criteria for a successful AI project, whatever that looks like, and when do you consider something ready for primetime?

Jon Allen (01:05:14):
So for me, it goes back to this mantra that I've carried through my career. People joke that cybersecurity was the office of no; I always want to be the office of how. So for me, it's enablement and transparency. And at the end of the day, it's got to be the right tool for the right thing. I mean, sometimes when you're searching through a suite of documents, grep is what you need. Don't think that just because I have this hammer, I need to use this hammer for everything I'm trying to do every day. So I think it's looking through that lens of: is this truly an enabler? Is it the best solution for what we're trying to accomplish? And how disruptive is it? I mean, is this something that becomes really transparent and second nature, or is this something that's just going to be another kludge that people are going to be frustrated with? Whether it's AI or any other technology, if it's well executed it should meet those two criteria, or you have to ask yourself, why are we doing this? Because if it's not meeting those two criteria at a baseline, it's not achieving continuous improvement and it's not achieving strategic alignment.

Jon Herstein (01:06:28):
That's great, Jon, we'll end it there. Thank you so much. This has been such a fun conversation for me, and hopefully for you.

Jon Allen (01:06:36):
Thank you. Appreciate the opportunity, and I look forward to seeing what's coming next with Box.

Jon Herstein (01:06:41):
Thanks for tuning in to the AI-First podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI-first era.