AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.
This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.
If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.
Jake McCoy (00:00):
Earn the right to automate. So don't use AI for the sake of AI. Like anything in business, I think you have to start with the outcome. What are we trying to achieve, and let's work backwards from that. I think where you start to get lost with this quite quickly is just, oh, we've got this new tool, so let me put this new tool on everything. That's not really the proper way to do this. You've got to work backwards.
Jon Herstein (00:22):
This is the AI-First podcast hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence, and putting AI at the core of enterprise transformation. Hello everyone and welcome to the AI-First podcast, where we explore how enterprise leaders are applying artificial intelligence in real, meaningful ways, not just as technology, but as a catalyst for change. I'm Jon Herstein, Chief Customer Officer at Box. In today's conversation, I'm joined by Jake McCoy, Chief Operating Officer at RWS Global. In this episode, we'll discuss defining AI principles for your business, balancing AI for operational improvements and enhancing the creative process, and how to responsibly deploy AI for everyone. Welcome, Jake. Very excited for our conversation today.
Jake McCoy (01:17):
Good morning. How's it going? Great to be with you, Jon.
Jon Herstein (01:20):
It's going great here. How are you?
Jake McCoy (01:21):
Doing well, doing well.
Jon Herstein (01:22):
We're very excited to have you here and have you on the podcast, and I would love to start with a little bit about you and also RWS Global, which I think for a lot of people may not be a household name, but certainly powers a lot of names that people will be very, very familiar with. So can you tell us a bit about RWS and yourself?
Jake McCoy (01:38):
So my name's Jake McCoy. I'm the Chief Operating Officer for RWS Global. We are an award-winning creator of destination experiences and sports all around the globe. So if you haven't heard of us, you've definitely heard of some of our clients before. We operate entertainment on over 45 different cruise ships with clients like Holland America Line, over 135 different theme parks around the world, many of those LEGOLAND parks, and all sorts of different theme park attractions, both international and domestic, as well as sporting events like the Paris Olympics or the FIFA World Cup.
Jon Herstein (02:17):
Wow, okay. Yes, we've definitely heard of those. And your role as Chief Operating Officer, what is the scope of that in an organization like RWS?
Jake McCoy (02:26):
So my role here is very unique. I've been with the company for a long time, actually started here as a production manager over 12 years ago, so I'm very familiar with the delivery and production side of our business, but I oversee all of our global production. At RWS we do work on land, at sea, and in sports, so I oversee all of those land, sea, and sports production teams as well as all of our back-of-house operations, IT, people and culture, and business operations, which encompasses all of our systems and technology.
Jon Herstein (02:59):
I think your role is pretty interesting and maybe somewhat unique in that you are really at the intersection of creativity and execution. So I would imagine you've got to have your brain in both of those camps simultaneously. So I'm curious if, well one, is that true, and then if it is, what gets you sort of most excited about that intersection today?
Jake McCoy (03:17):
Today? Absolutely. So I would say this is actually very much where my career has always lived, at this intersection of creative ambition and operational reality. So I came up through production, as I'd mentioned, actually started in this industry as a stage manager. But in this world of live execution, public failure is not an option. It's very unforgiving. We can't screw up in what we do. So in my world, you learn very, very quickly that vision without structure just doesn't scale. It doesn't work. So for me right now, I think we're at this really incredible and amazing moment, which is that creativity can now come together with operational intelligence and flourish in a whole new way, in this really clever, smart way that brings us to a whole new era from an enterprise standpoint. So we're not just able to do things slightly faster or slightly more efficiently; we're able to do things that we never could have even dreamt possible before.
Jon Herstein (04:22):
This is a very common theme that I'm hearing certainly in conversations with customers and on the podcast, is this idea, and we'll talk obviously about AI very shortly here, but the idea that it's not just about operational efficiency, doing things faster, cheaper, better perhaps, but also unlocking capabilities that you just couldn't even tackle before, and that sounds like exactly what you're seeing.
Jake McCoy (04:43):
Absolutely, 100%. It's going to be able to give us time to work on things that we couldn't even imagine would've been possible in our workday previously.
Jon Herstein (04:51):
So do you feel that today, given the current capabilities of AI, we're at the point where it's a requirement, it's necessary in the business, versus something that, yeah, we can take it or leave it, it doesn't really matter?
Jake McCoy (05:03):
Yes, at our scale right now, it's not a pipe dream. This isn't something we're talking about for someday in the future; we're at this point right now, and in today's day and age, this is basic operational hygiene as far as I'm concerned. So when you're onboarding thousands of people seasonally, when we're operating across multiple jurisdictions in different countries delivering these experiences, where ultimately for me safety, IP, and creative quality are at stake, the risk of actually not doing this is too high. So for me, AI becomes necessary when the cost of inconsistency exceeds the cost of the change, and we're at that moment. We've crossed that line.
Jon Herstein (05:46):
That's a really interesting sort of phrasing. Can you say more about that? The cost of inconsistency exceeds the cost of change, say more, and how do you get your teams thinking about that?
Jake McCoy (05:53):
At our core, really what we're trying to do here is not just deploy AI for the sake of AI and say, hey, we're doing that. Ultimately, we want to figure out what the outcome is. What are the blockers? What are those teams trying to achieve, and how can we improve on those workflows? How can we really zoom in and get laser-focused on making sure that we are doing the things that really matter and that make a difference? Ultimately for us, where we are a project-based business and we're a service provider, consistency, quality, safety, all of these things are really, really important. So we want to be able to take the things that we know we can standardize. Obviously we're not a cookie-cutter business. Every single client that we work with is different. Every project is different. That's a given, but our process remains. The infrastructure that we're using remains consistent. If we can stamp that and build that into the system so that we're able to print that consistency time and time again from an operational standpoint, that's really where we're at right now. We don't have the time or the energy right now to recreate the wheel every single time from an infrastructure standpoint. So we're now at this point where we need that consistency. We need that model baked into our systems. That's where AI came into play for us.
Jon Herstein (07:11):
And we'll touch on what that means from a creativity kind of side of things, because there's probably a different lens for how you think about the use of AI on the creative side, but certainly in terms of that operational consistency, incredibly helpful. Now, where do you start? Is it on the creativity side, the operation side, risk reduction, or is there some other domain that you're focusing on to say AI can really be an assistant here?
Jake McCoy (07:31):
So we start with operations, not because creativity is any less important, but ultimately because ops is where that friction is going to hide. So if we reduce time spent searching, sending, validating, checking, and reconciling, creatives ultimately get more time to create. That's the point. That's what we're after in all of this. So Box AI helps us surface the right information at the right moment from decades and decades of content, without having to ask those teams to become data scientists of our content.
Jon Herstein (08:07):
Can you talk a little bit about that? So you mentioned decades' worth of content. You've got all this production data, everything you've ever done, most probably stored in unstructured files. Can you talk a little bit about what that problem is and how you're tackling it?
Jake McCoy (08:22):
So for us, that problem is the heart of it, and where we're going with it is we're taking what was previously the approach of content management. That's how everybody approached it previously: you had a file for something, you put it into a folder, you had a nice stack of organized folders, and for a long time that was kind of enough. We're now to the point where, if you continue to do that time and time again with the amount of files and the volume of assets we're talking about for these productions and experiences, it doesn't stay scalable over time. It becomes impossible for us to find something from a show that we did five years ago when we've created so much content since that point. So having your data ready for AI is a key and important point of this. That metadata piece, which I'll get more into a little bit here, but by going through and applying the right type of tagging protocol, it gives you access to that content in a whole new way, so that we can get to what we really want, which is that natural language ability to say, I'm looking for this, type it into the search bar, and have Box do that heavy lifting for us and bring that content back to me.
Jon Herstein (09:31):
Are there processes that feel to you like such obvious fits for the capabilities of AI that you just maybe haven't gotten around to tackling yet? You have a laundry list, a backlog, of here's all the stuff we want to go do. What shows up there?
Jake McCoy (09:44):
So we do a lot of work with cruise ship clients, like I had mentioned. Marine compliance is one of the strictest in the industry, so we need a very, very buttoned-up onboarding process. One of the things that we're looking at right now that feels very AI-obvious, but where we're not fully there from a deployment standpoint, is working autonomous AI agents into that workflow, which would allow you to run compliance procedures, go through and check those documents, give a response back to the individual who's working on that, and give a clear or no-clear and a confidence score. So that is something for us that feels almost inevitable at this point. It's a process right now that requires a lot of human time. There's an element of potential human error in that mix right now, and there's value in just having that one step alone, let alone that one step built into an entire workflow where that's just a compliance box that we're ticking as we move through a fully automated workflow.
Jon Herstein (10:49):
It sort of goes to the point that it's not just about making one part of a workflow more efficient, but actually rethinking the entire thing because you've got capabilities that you just didn't have before.
Jake McCoy (10:57):
Exactly.
Jon Herstein (10:57):
What are the implications if, let's say, you get this wrong in the case of an employee onboarding, and someone who should be cleared isn't, or vice versa? What's the business impact of that?
Jake McCoy (11:06):
Well, ultimately right now, it happens already in the human process. An example of that might be that in a six-page medical form that's written in a different language, on page four there was an MD that needed to sign, and he didn't sign in the right color ink, or the page was left blank, or there wasn't a visible signature. So there's already a human component to that now where that might get missed. So there's a chance that we're actually closing the gap on that risk, but in the event that it doesn't go right, our stance is that no matter what, anytime we are putting AI into the mix, whether it's an agent or any other use of AI, it's immediately followed by a human step there. So we believe that over time this is going to get more and more accurate, and it will be something that we can continue to rely on and build trust in as we train those models and build those agents out to be fully functional in the field. Until we're there, it's going to be a lot of beta testing and a lot of human intervention, but we believe that for operational tasks, onboarding is a really, really great place to start.
Jon Herstein (12:08):
And I also love the fact that in this case, onboarding is quite literal, right? You're talking about getting people onto ships. We talk about employee onboarding all the time, but it's a very literal thing for you. That's great. Now, I would assume, at least for now, that from an accountability perspective, whoever's responsible for that process today, the human, is still responsible tomorrow with AI.
Jake McCoy (12:27):
Absolutely. Absolutely. Yeah. We would not change that. And again, that's a core part of our principles. So much of this with AI, it's about the human element, especially when it comes to the creativity piece. Human creativity has to lead the way, and even in the operational sense, the humans are still the ones who are driving the bus here; AI is helping us do some of that lifting in the background.
Jon Herstein (12:49):
So I'd actually love to go there now, which is this idea of human creativity leading the way, which I think you've written down explicitly as one of your core AI principles. Is that right?
Jake McCoy (12:57):
Yeah, absolutely. I'll jump into this. So this is such an important point for us. So much of what we do is creative. That's why we're getting hired by our clients: to be creative, to be imaginative, and to put these incredible experiences out into the world. So if we're relying on AI to do it for us, then we're not doing our job. So we've said right off the bat, nope, that is not a part of our philosophy. That is not how AI is going to function at RWS. So AI is not allowed to originate story, tone, emotion, intent, none of that. Those are all human jobs. Where it excels is in removing the noise around that creation: things like version control, compliance checks, repetitive validation, sharing, collaboration, comments, feedback, approvals. That's actually where the bulk of the work sits in a creative process, believe it or not. So we really need to make sure that that is as tight and efficient as we can get it to be, so that those creatives who are working on those shows, who are creating those scripts, who are doing those incredible designs, who are building those characters, have more time being human and more time being creative, and less time worrying about versions and sharing and collaboration and how we're getting this file from A to B. That's the part that we can really streamline.
Jon Herstein (14:24):
It feels like having such a bright and explicit line about where you will use AI and where you won't has got to be comforting for the creative talent and the folks who do that sort of work. How is it received by people, and does it make people feel more comfortable?
Jake McCoy (14:38):
Yes, I think it makes people feel comfortable. I think it also makes people understand where the boundaries are, as far as what we believe as a company. It's not to say that every company is going to believe the same thing. So I think it's comforting for a team member who's working on these projects to understand here's where it's safe to play and here's maybe where it's not. I also think, from a comfort standpoint, it's very comforting for our clients to know that when they're hiring RWS, humans are the ones who are doing that work.
Jon Herstein (15:08):
You could imagine a real concern arising if they felt like, hey, this script came from AI, not from the people that we just paid to produce the script for.
Jake McCoy (15:17):
Which is why it is honestly just not even a consideration for us on those creative parts, but it is so core to what we're trying to do on the operational side of the business because there are major efficiencies we can get by really bringing AI into the enterprise.
Jon Herstein (15:31):
Can you talk a little bit about the how on this then? So you've got a clear policy or principle in this case that probably then turns into policies. How do you educate people about those? How do you enforce those? What does it look like operationally?
Jake McCoy (15:43):
So we found the biggest thing with this is communication. I think there are a lot of companies that I've seen that have had quite scary approaches to AI. It almost creates a little bit of a fear culture, coming out and saying, okay, here's this really scary policy, and here's what you can do, here's what you can't do, and if you break the rules you're going to get fired. It's like, okay, great. That's how a lot of people came out with their AI policies in the beginning, and it's because we went from one extreme to the other. We had no existence of this in our enterprise at all, and then almost overnight, after the ChatGPT moment, it was everywhere. AI was just everywhere, and we had IT directors and people who were in roles like mine really concerned about this.
(16:25):
We wanted to make sure that we remained secure and we remained compliant, and all of those things. So point number one was making sure people understood the why. It's not just coming out and saying, okay, here's a policy, and this is what it is, top down, like it or not. We socialized these policies before we launched them. We found some AI champions, if you will, around the business, in different pockets, doing different types of tasks. They met regularly and worked with us after the AI philosophy came out, before we launched the policy, and we workshopped that policy with them and said, what are you trying to do? Where can we put up some guardrails, and where can we not, and where is it sort of not necessary because we're going to bake it into the process?
(17:11):
So that's been a really, really productive piece, which was actually getting people bought in on the creation of those policies to start. Of course, that's done in a safe way, once we've gone through and made sure that we have all the checks that we need from a governance standpoint. But then after that, once you've got your ground rules, it's getting people bought into the actual concept of this and the idea of it. And that's really where the change management, the softer piece on behavioral change, comes in. Again, it's communication and collaboration. It's not just deploying a tool and saying, here's a new tool, everybody needs to use this tomorrow. It's, again, getting in there with those teams. What do they need? What are they working on? What would they like to do? What are their blockers, and how can we help them? That's what we're trying to do.
Jon Herstein (17:58):
I love that approach, beginning with the idea of finding the champions in the business, the people who are maybe a little bit more gung-ho about this, a little more progressive in their thinking about it, and making them part of the process, as opposed to saying some central team has dictated the rules and everyone's going to follow them. It feels more acceptable to the employee.
Jake McCoy (18:14):
Absolutely. And it promotes such a positive culture around this as well, right? Because it doesn't feel like it's something being forced upon people. It actually feels like a benefit and something people are getting access to and that they're excited about.
Jon Herstein (18:27):
So we covered a lot in the creative side. I want to talk a little bit more on the efficiency side, and you've got a phrase efficiency where it counts as sort of a core pillar. So I'd love to understand a little bit more about what does that mean to you and how does it sort of show up in the organization?
Jake McCoy (18:42):
Absolutely. So efficiency where it counts again goes so much to where we are prioritizing, right? I'll say what we don't want to do is come out and say, oh, well, Jake had an idea of something that he thinks would be more efficient, so I'm going to go ahead and roll that out across the whole business. Again, that feels a little bit backwards. Ultimately, it might be that there's a particular team facing an issue today, this afternoon or this morning even, and we could go ahead and help them, or potentially unlock a new door, or give them access to a new data stream, or work on a new workflow together, and that one particular thing that team member came up with could have an impact times hundreds or thousands of data points after the fact. So I think it's listening to our teams, it's having your ears up and your eyes open and understanding what people need across the business, and then having someone in a role who can analyze that and say, yes, this one particular thing that bubbled up through a lab or through an idea suggestion actually does make a huge difference to the business and would make an impact for us.
(19:51):
That's something we focus on. So I'll give you an example for us, a big one was legal.
(19:56):
So before Box, you had hiring managers submitting a contract request with a scope, legal manually putting that all into a contract, sending that out over DocuSign, tracking it, and filing it in Dropbox or wherever, depending on the region. We were all over the place, and also depending on the region, that workflow was slightly different. So after Box, and I'm also getting a little bit into what the future looks like here for us, that hiring manager submits a contract request, DocGen puts that into an agreement for legal review, legal approves it, it's sent out via Box Sign and automatically filed, all built in and part of a pre-programmed workflow with Automate. So we're really excited to be able to get there, and we know that that one thing, which actually came to us from that department, that wasn't a tops-down rollout, that bubbled up from the department, would impact thousands and thousands of touchpoints. We're contracting 8,000 performers a year, so at the very least that's 8,000 contracts to start, plus the offer letters and the associated documents with that. So that would be a really incredible and powerful moment. If we can get that one process fixed, that frees up a whole lot of resource for us to be able to continue to take on new projects and to spend more time ultimately with our clients, with our guests, and with our team.
Jon Herstein (21:21):
That's a great example. What I particularly like about that one is that it came from the business, in this case, the legal department, as opposed to you all saying, Hey, there's a problem over here. We're going to fix it for you. They presumably identified the problem themselves, said this could be more efficient, and then did they come to you with here's how we want to fix it, or did they build something out and bring it? What actually happened?
Jake McCoy (21:40):
So what I usually say in that instance, because a lot of the expertise and the knowledge around these features sits with me and my team, is what I'm asking of the business at this point is to come to us with the problem, come to us with the problem statement and say, here's what I'm trying to do and I have an issue with this, and then allow us to do a little bit of diagnosis and present back to you and say, here's what we can do.
Jon Herstein (22:03):
Makes sense. What's the rough balance today of how many of these initiatives are coming from ground up ideas coming from the business or other departments and how much is more top down? Is there a way to put a,
Jake McCoy (22:14):
I would say it's probably about 50/50, but the interesting part about the tops-down is that a lot of what's actually coming from the tops-down group is really just things that are coming from other teams. So each team is kind of meeting individually on this, but what we're finding is, we've got three different production divisions, like I mentioned: land, sea, sports. It might be that we're working on something with land and then actually sea is completely unaware of that conversation we're having over here, but we're realizing, you know what? We need to have a conversation with this team and say, hey, this thing we're doing over here, we're also going to do it over here. So some of the tops-down is really just taking ideas that have bubbled up in another area of the business and spreading them much wider.
Jon Herstein (22:53):
We've seen a similar thing. We do a lot of internal showcasing, where one department or team is using Box AI for some use case, and we'll showcase it at one of our weekly Friday lunches, and then someone else will say, huh, they're doing that in recruiting, but that process is actually very familiar and similar to what we do over here in support, for example. And so we're definitely seeing a lot of that sort of organic growth of ideas.
Jake McCoy (23:16):
Yeah, it's really exciting, and I think across the business, if you're getting your teams involved in this process, that's so important, because just sitting at our desks as leaders, there's only so much we're going to be privy to as far as the nuance of this day-to-day work, and we really need that with this stuff. You're not just rolling out a new tool. It's a completely different approach to the way we've dealt with software or SaaS tools over the past 10-plus years or so. We're not just sitting here doing a deployment and rolling it out. We're really changing the mindset and the way that people are working, and you have to do that in a collaborative fashion or it doesn't work.
Jon Herstein (23:53):
And it sort of fits under this banner of the idea of AI for everyone, which I think is also sort of a core philosophy that you all have. Do you find that there are some groups or teams or types of people who adopt that concept quicker than others? And I'm thinking about this from the standpoint of advice for others who may be watching or listening to this podcast, where maybe they're not seeing as much of that. Have you found things that work to sort of unlock that idea in the business?
Jake McCoy (24:19):
So for us, I think the big thing, again, is establishing that playing field and the ground rules at the outset. AI for everyone doesn't mean that we have AI without rules. It means that we've got shared access and shared responsibility. So set up the ground rules, kind of build your sandbox. Then you need to figure out what the blockers are, what the opportunities are with the team. That's where maybe you're doing a lab series; we're actually working hand in hand with Box right now and doing a little bit of a roadshow around the business. Those types of things are really great to get the word out there, get excitement going, get your leaders behind this and feeling excited. Everybody needs to get involved in this. Again, it can't just sit at one level of your organization. It can't just be your subject matter experts.
(25:09):
It can't just be your management tier. It's got to be all areas of the organization, get leadership bought in. And what I've really found is that this wasn't so much split between teams. Everybody's actually been excited about this, and it's really about making sure that those core principles are adhered to. And once people realize like, okay, here's the world that I've got. Here's the tools that I've got, this is safe, and I'm starting to actually see things change day to day. That's where the whole mindset shifts and you get people not just excited about it, but keen to participate and to be part of this process and to say, how can we keep making this better every day and how do we be part of helping us become more efficient?
Jon Herstein (25:52):
That's a great approach. Now, you mentioned leadership a couple times, so I am curious a little bit on the leadership side. So it sounds like the teams have embraced, that's great. It sounds like leadership's on board, but were there leadership muscles that you had to develop that didn't exist before because people had to think about this differently? Was there persuasion you had to do with other leaders? Obviously you're all in, but let's talk a little bit about the leadership side of this.
Jake McCoy (26:13):
So I think the first thing, for me personally, was that I couldn't stand behind anything like this without being fully informed, and this is just a core belief of how I am at work. I don't stand behind anything that I'm not fully informed about. So as you know, Jon, I've been going around the world, I've been going to BoxWorks events, I've been spending a lot of time learning about tools, not just within Box, but across the entire market sector right now. So I've been spending a lot of time making sure that I understand what's going on out in the world, both inside our business and outside it. So I think that's important: you've got to have somebody at the top who's bought into this. And then for me, a lot of this was making sure that we can maintain a level of discipline without damaging trust. So again, we don't want to come out too scary in the beginning with this.
(27:01):
We want to make sure it feels collaborative. I had to make sure I was comfortable saying no, because there are a lot of other things that we've determined in this process aren't great, or aren't where we're comfortable from a risk-profile standpoint. So there are things where we'll have to say, hey, we really appreciate your suggestion here, but actually we're not going to move forward with that tool, and here's why. I think giving people the why is the most important part of all of this, and I cannot reinforce that enough. Just saying yes, we believe in this, or we don't believe in this, becomes very confusing and difficult for people unless they can understand how it matches back up to your philosophy. Why can I do this and not that? Why are we saying we're using this tool for this and not for that? So I think it's not jumping straight to really judgment-heavy decisions, making sure that we get in there with the teams and have been part of that process with them, and always reinforcing the why so that people understand the logic behind your decision.
Jon Herstein (28:04):
That makes a lot of sense. Now, you touched on trust and you also touched on governance without actually saying governance. So I'm going to say it, I'm going to push you on this a little bit in terms of what is your approach to governance, and part of it is just the process side, but the other is from an accountability and responsibility side, who feels like they're responsible for governance around these tools and capabilities?
Jake McCoy (28:24):
So I think it's AI for everyone, not AI without rules; it just means we've got shared access built on a foundation. For us, a single source of truth is at the base of it. We've got all of our content in one place, we have it structured and clean, and we're maintaining good data hygiene; it's everybody's job to make sure your content is in the right spot. If files are sitting on your desktop or on a USB flash drive, they're not in the single source of truth. So everybody is responsible for making sure we've got the content in the collaboration environment. And then, practically, what we're doing is really three things. First, it's separating access from authority. With Box, everybody can use AI to search, to summarize, to understand the content that's in there; far fewer people, depending on your role and your function and what you're trying to do, can use it to generate, approve, or operationalize anything that has a risk element to it.
(29:28):
We've taken extra care there. But for somebody who needs to pull up a 50-page, highly technical report and get a quick two-paragraph summary of it, we've realized there's a very low risk profile, and everybody should be able to get a summary. But who's using generative tools versus who's using summary tools? That's something you've got to figure out. So that's number one: you separate access from authority. Number two, you've got to figure out and embed those guardrails into workflows, not just into your training. If there's something that you believe, as an organization, is really important, something you don't want to happen, then build it into the model. Figure out a way to engineer it so that it's only allowed in the way that you want. There's a lot of control that you have in this world.
(30:15):
So as long as you're using enterprise tools, you've got all of the guardrails and the controls that you need, so make sure it's built in. Again, this goes back to the mindset thing. If people think they have to make all of these independent decisions themselves, that's where you start to lose people's confidence. If they're sitting there saying, well, here's my 25-page policy, and I think I can do this but I'm not sure, and they have to do their own analysis every time of what's safe and not safe, that's where it starts to get confusing. So build it into the workflow, not into your training. And then third, you standardize the principle, not the prompt. It's not about micromanaging what people are asking of AI; it's being very explicit about what AI should never do in the first place. We've already talked about how it's never creating shows for us. It's never handling creative generation, it's not replacing creative judgment, it's not making a safety call, it's not originating a meeting. So we've defined what AI can and can't do for us, and again, we've built that in. Ultimately the result is that you've got all sorts of different people across the business, casting directors, technicians, producers, executives, and everybody benefits from that same intelligence layer,
(31:40):
But the controls are appropriate to their role.
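The "access versus authority" split Jake describes can be sketched as a simple role-to-capability mapping: everyone can search and summarize, while fewer roles can generate or approve. This is only an illustrative sketch; the role names and capability tiers are hypothetical, not RWS Global's or Box's actual configuration.

```python
# Hypothetical role-to-capability map: everyone can search and summarize,
# far fewer can generate or approve anything with a risk element.
ROLE_CAPABILITIES = {
    "casting_director": {"search", "summarize"},
    "technician":       {"search", "summarize"},
    "producer":         {"search", "summarize", "generate"},
    "executive":        {"search", "summarize", "generate", "approve"},
}

def is_allowed(role: str, capability: str) -> bool:
    """True if this role may use this AI capability; unknown roles get nothing."""
    return capability in ROLE_CAPABILITIES.get(role, set())

# Everyone can get a two-paragraph summary of a 50-page report...
print(is_allowed("technician", "summarize"))  # True
# ...but only some roles can generate content.
print(is_allowed("technician", "generate"))   # False
```

The point of centralizing the map is the one Jake makes: people shouldn't have to re-derive a 25-page policy on every request; the guardrail lives in the workflow.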
Jon Herstein (31:43):
It's very, very thoughtful. I want to touch on just one thing back on guardrails, and this is just a little bit of a guess on my part, but I would assume that you couldn't know all the guardrails you'd need to put in place right upfront. So I assume that's evolved and iterated over time, or did you have it all dialed in right from the start?
Jake McCoy (31:59):
No, with AI, I think it's impossible to know everything upfront, especially because it's changing every day. But I would say we had a really, really good start to it, and then every day is about tweaking, not just your guardrails but also the quality of your outputs. So one of the things I enjoy quite a bit about AI Studio in Box is that you've got a bit of a world to play in before you go out and launch or deploy those models. There's quite a bit you can do just in your configuration, or even in changing the actual model itself, and then doing test prompts to figure out how particular models respond to different things. So there's a lot you can do in the environment before you've gone live in front of 300 people.
Jon Herstein (32:44):
We've found that to be a very, very important capability that customers are asking for: the ability to test, right? Test with sample data, test with prompts, try different models. And I think what we're going to start to see, when we get into true workflows, is that it won't even be a single model. It might be, we're going to use this model for this part of the process, and this other model to validate what the first model did. I think it's going to get pretty interesting.
Jake McCoy (33:07):
Yeah, we're finding that already, because the models themselves have different features and different abilities built in. So with the example I gave earlier, the cruise ship onboarding, being able to read and analyze those PDFs is a very particular OCR piece. We've picked a model that is quite specific to that, and we know it's very good on compliance. That doesn't necessarily mean I would choose that model for every agent in my enterprise, but I've landed on that one for that particular task.
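The per-task model choice Jake describes, an OCR-strong model for PDF intake and different models elsewhere, amounts to a small routing table. A minimal sketch, where the task names and model names are purely illustrative assumptions, not real products:

```python
# Hypothetical task-to-model routing table.
MODEL_FOR_TASK = {
    "pdf_ocr_intake": "ocr-specialist-model",    # strong OCR for scanned PDFs
    "report_summary": "general-summarizer",      # low-risk summarization
    "draft_validation": "second-opinion-model",  # checks another model's output
}

DEFAULT_MODEL = "general-summarizer"

def pick_model(task: str) -> str:
    """Return the model configured for a task, falling back to a safe default."""
    return MODEL_FOR_TASK.get(task, DEFAULT_MODEL)

print(pick_model("pdf_ocr_intake"))  # the OCR-specialist model
print(pick_model("unknown_task"))    # falls back to the default
```

This also matches Jon's multi-model point: one entry can route drafting to one model and validation of that draft to another.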
Jon Herstein (33:36):
Now I want to shift gears to challenges and lessons learned, which is I think maybe one of the most useful parts of all of our conversations is what can people take away from this? And I'll start with what has been harder than you expected? Was it more data readiness or cultural readiness?
Jake McCoy (33:50):
Data readiness, for sure.
Jon Herstein (33:52):
Okay.
Jake McCoy (33:52):
I think innovation is, and has always been, a core value of RWS; it is quite literally one of our core values. So innovation is part of our culture. In the world of AI, data hygiene is incredibly important, probably more now than ever, and in the enterprise environment you need really, really tight data hygiene, and your data needs to be ready in order for AI to be effective. So for us, cultural readiness wasn't the problem, because after the ChatGPT moment happened out in the world, everybody was already excited and wanted to use AI. I didn't have the problem of trying to get people on board; it was actually quite the opposite. What we found is that if everybody can access these tools out in their personal life,
(34:37):
The expectation is that I should be able to have access to them at work, and they should probably be better than what I can get at home. So we actually had a lot of work to do quite quickly to get ready to meet that expectation from the culture, and say, hey, everybody is already using these tools to cook dinner, so now they're expecting to come in and use them for work. So we had some work to do to figure out, okay, all of the things I talked about, the governance, the safety, the data hygiene, the metadata, all of that had to happen, because the people were ready for this.
Jon Herstein (35:13):
Any idea how many times you got the question, what's our AI strategy in those early days?
Jake McCoy (35:18):
A lot. I got it enough times that it made me sit down and say, you know what? I need to take this weekend and sit down and write an AI strategy.
Jon Herstein (35:25):
And AI principles. How early did you write your AI philosophy down?
Jake McCoy (35:29):
I had some rough drafts of this probably close to eight or nine months ago, but ultimately a lot of what tweaked it was my experiences at a couple of the different Box events and a few other industry events I've been going to, because again, it's irrelevant what I think sitting here in my chair. I need to see what's happening around the rest of the world, what's happening with these tools, what's happening in the different market sectors, what the expectations of our clients are. So there was a lot of work to do before it was just "let me sit down and put some words to paper." We had to really make sure we did our homework here and got this right.
Jon Herstein (36:07):
So you personally did a lot of learning just through exploration and discovery and going to conferences and reading and all of that, but we also obviously learned through mistakes, and I'm curious, is there anything you can share in terms of an initiative that didn't take off or didn't go the way you wanted that you would say lesson learned, do it differently next time?
Jake McCoy (36:24):
Yeah, I would say early on we had over-automated a couple of workflows that still needed a lot of human nuance. I think our lesson there was pretty clear, and this is a new little tagline I've been giving it: earn the right to automate. Don't use AI for the sake of AI. Like anything in business, I think you have to start with the outcome. What are we trying to achieve? Let's work backwards from that. Where you start to get lost with this quite quickly is just, oh, we've got this new tool, so let me put this new tool on everything. That's not really the proper way to do this. You've got to work backwards. So for us, it was finding the right balance of what works, where you need those human interventions, and ultimately, most importantly, what is giving you the right outcome. So test it, make sure you feel comfortable with it. Alpha test it, beta test it before you launch. After beta, maybe even go in with a slightly larger group, and then go a little bit wider with it. As long as you've got a good, trusted group in place working on these tools, I think you'll be set up for success there.
Jon Herstein (37:36):
I think we might need to title this episode Earn the Right to Automate. I love that. If not that, we'll get it in the show notes at least as a takeaway for people. And speaking of takeaways, are there mistakes that you actively try to get others to avoid? When you're talking to peers, whether inside the company or outside, do you say, hey, listen, definitely don't do this, or definitely do that?
Jake McCoy (38:00):
Yeah, I think the biggest thing I see is people treating AI like a product rollout instead of an operational change. This isn't just, hey, we've got a new tool, and we're going to implement it the same way you would roll out a new SaaS tool. There are a lot of leaders who I think are rushing to deploy these tools before they've done the hard work of figuring out how their business actually needs to run differently. You have to start there. If you're just going out and launching pilots and dashboards and all of these new tools without really establishing your single source of truth, getting clear ownership, and doing the work on governance, I call that AI theater. You can launch, you can make an impressive demo, you can do some incredible things with AI in a short amount of time. But we're talking about things you are embedding into your enterprise, where these particular workflows or buttons are going to get clicked hundreds of thousands, maybe millions of times.
(39:00):
So it is so important that you get the time and the quality right, and you've got to make sure you're treating it like a full-on change management process. It's got to be fully holistic. You've got to get the right players involved. And the lesson I share is that AI is only going to amplify what already exists for you. If your content is fragmented, if your process is inconsistent, if accountability is unclear, AI is only going to make all those problems louder. So you've got to do the fundamental work to get your house in order so that you're ready to bring in a tool like this. And then once it's there, and you've done the homework and the foundation is right, it's absolutely incredible. The sky is the limit, really.
Jon Herstein (39:44):
It feels like there's a whole new requirement for curation, curation of content, curation of processes, and doing that work before you just say, oh, go point AI at this problem and have it figure it out, because you're going to amplify any issues or challenges you have in that process or in that content.
Jake McCoy (40:00):
Absolutely. Yep.
Jon Herstein (40:01):
So are you ready to pull out your crystal ball? I want to ask you about the future. Absolutely, I'm ready. You're ready to do this? Okay. I won't hold you to it, because I'm going to ask you an impossible question, which is the five-years-out question, which I don't think any of us can really imagine right now. But what would feel negligent to not have fully driven or automated by AI five years from now?
Jake McCoy (40:22):
Searching for information. If in five years your team is still searching for information, I'm going to say that's a leadership failure, because the tools can do that. But you've got to do the homework, and we've got to get that right, because that is the most basic, foundational thing that we should be able to fix ASAP.
Jon Herstein (40:44):
And that's probably not even a five years thing. That's maybe a one year thing or a six month thing, right?
Jake McCoy (40:48):
Yep.
Jon Herstein (40:49):
Alright. This isn't exactly a prediction, but what's your most controversial take on AI and creativity?
Jake McCoy (40:57):
This is a great question. I would say my most controversial take is that AI is not the threat to creativity; leadership laziness is. AI is not going to dilute originality on its own. What I think really erodes it is when people use AI as a shortcut to avoid making hard creative decisions. They're not clarifying intent. They're not investing in their craft. If you're leaving AI to fill a vacuum of taste or vision or accountability, you are not innovating. You've just decided you're not going to do the work anymore; you're abdicating. So you can't do that. You have to do the work to get your pillars right, get your foundation right, and understand that AI is not a threat to humans. We have to figure out how to harness this most incredible tool that we've ever been given by the technology industry.
Jon Herstein (41:51):
So don't be a lazy leader is what I'm hearing from you.
Jake McCoy (41:53):
Exactly. And I would say the real controversy is that if AI replaces creativity in an organization, it's because creativity was never really there to begin with. Strong creative cultures don't get weaker with AI, but cultures do get exposed. A weak culture will get exposed by AI. If you don't have your data structured, that will get exposed by AI. If your governance is wrong, that will become more of a problem for you. So you've got to make sure that ultimately you've got people who are bought in, who are invested, who are spending the time and the energy and the resources to get the foundation right. You wouldn't go in and hire someone to build a house for you if you didn't have a foundation. And I think that's really what we're saying. I believe these leaders need to build this strong, solid foundation, and then let their teams come in and build the house. But you've got to get that right.
Jon Herstein (42:44):
Makes perfect sense, and probably applicable not just to AI but really to any new technology that comes along.
Jake McCoy (42:49):
Just think about that.
Jon Herstein (42:49):
So if another COO called you up, or a CIO or CTO or anyone in a similar role, and said, Jake, I am so far behind, I don't know what to do, what would you say their first move should be?
Jake McCoy (43:02):
I would say single source of truth. Get it all in one place. You've got to make that decision, and I know it's easier said than done. We were acquiring businesses for five to ten years; we had legacy content coming in, in the cloud, with different providers all over the place. If you don't have that right, you're doomed to failure from the start. So you've got to make the decision; for us it was, get everything in Box, single source of truth, establish that. Then metadata-first content architecture: you've got to get the right information attributed to that content. When we historically thought about metadata, I don't think anybody ever thought metadata was going to be something we were talking about on a podcast. If you had asked me that three years ago, I would've been like, what? No. Metadata is when the file was created and how big it is. Who really cares?
Jon Herstein (43:52):
Who cares, right?
Jake McCoy (43:53):
Metadata for my file tells me who the client is. Is it land C sports? What venue is it at? What's the project code? All sorts of information that is so valuable. So if you've gotten that right, and now with newer tools coming out every single day to help us get it right, it becomes absolutely incredible, because it's no longer just about what's in the file; it's the stuff in the metadata. I could have a 50-page file, and what I really want to know about that file might not even be in the file anymore, and that's okay. Maybe what I need to know about that file is that it's a song sitting in a particular show: a rock and roll song that is part of a cruise ship show, in a medley that is 45 minutes in duration. Those four data points are the most important things I could know about that song, not the notes on the page in the document.
Jon Herstein (44:50):
That is so spot on. And by the way, we've had a lot of internal discussion about this point of what you just described. Is that actually metadata or is it just data? Right, right.
Jake McCoy (45:00):
Yeah,
Jon Herstein (45:00):
I don't know.
Jake McCoy (45:01):
Yeah, we might need a new word for it. So for me: single source of truth, then get your metadata sorted, then you can do all the cool stuff. That, to me, is where you start to go from content management to intelligent content management. Once you've got your house in order, then you can do agentic workflows, document generation, linking things together, and utilize all those bells and whistles that help you really see the efficiencies. But for me, so much of this is about focusing on quality, because at the end of the day, if you've done all of that work and your output is crap, then what was the point of doing any of this to begin with? What we're trying to do is raise the bar. So remain focused on quality throughout, follow those steps, and I think that becomes a really good roadmap.
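The metadata-first record Jake describes, where the client, venue, project code, genre, show, and duration matter more than the file body, can be sketched as a small typed schema. The field names and values here are illustrative assumptions, not a real Box metadata template.

```python
from dataclasses import dataclass

# Hypothetical metadata-first content record; fields follow the kinds of
# attributes Jake mentions, not an actual production schema.
@dataclass
class ContentMetadata:
    client: str        # who the content belongs to
    venue: str         # where the show runs
    project_code: str  # internal project identifier
    genre: str         # e.g. "rock and roll"
    show_type: str     # e.g. "cruise ship medley"
    duration_min: int  # running time in minutes

song = ContentMetadata(
    client="(client)", venue="(venue)", project_code="PRJ-0001",
    genre="rock and roll", show_type="cruise ship medley", duration_min=45,
)

# The data points that matter for retrieval live in the metadata,
# not in the notes on the page.
print(song.genre, song.show_type, song.duration_min)
```

With records like this in place, search and agentic workflows can filter on attributes rather than parsing every 50-page file.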
Jon Herstein (45:49):
Incredible. I like to wrap things up with three concepts that are very near and dear to my heart in my role in customer success, which are value, culture, and experience. So I want to rapid fire a little bit here and just say from a value perspective, how do you measure, assess, or determine that these AI initiatives that you're driving are actually delivering value for the business?
Jake McCoy (46:11):
Absolutely. So I think for us, value is not necessarily something you're going to see in a dashboard. At our company, value is fewer late-stage surprises. It's safer shows, faster onboarding, consistent delivery. It's the tangible things that I can see, that my clients are talking about, that my team members are telling me about. Sure, it's great if I can see a nice stat showing we've been a bit more efficient at these things; all of that is great. But really, how are we moving the needle? That is so important. And in our industry, I've said this for a long time, not just about enterprise technology tools but about technology in the entertainment space in general: if a guest or a fan never notices the technology, then we've done our job. That same principle is how we think about experiences and how we think about everything. Ultimately it is a tool that is there to provide a specific function, but what we're doing is storytelling. We're using all of the tools at our disposal to make sure we can put out the best possible story in the world.
Jon Herstein (47:22):
So you touched on experience, so maybe I'll jump ahead to experience too. How do you know you've created a great experience for your employees, your end users, your customers, whoever the stakeholder is? How do you know you've done it?
Jake McCoy (47:34):
So much of this, I mean, there's the part you can see directly, in scores, in feedback, in comments, in reviews. A lot of our clients are clients where people are spending time on their holiday or taking vacation, so a lot of the information about what people think of our product is publicly accessible. You can find it on TripAdvisor; you can go out and look at that content, because people like to talk after they've come back from holiday. So that's part of it. Obviously, a lot of this is having conversations with our clients. But most important for me is actually getting out there. There's nothing more impactful for me than getting out there and seeing the look on a guest's face or a fan's face when we've put a new experience in front of them. That is just magical.
Jon Herstein (48:23):
I'm not going to ask you how many cruises you've been on in the last four years, but I'm guessing it's more than a few.
Jake McCoy (48:28):
Yeah, I've spent my fair share of time on cruise ships. I'll say that.
Jon Herstein (48:33):
That's awesome. And then the last one is just around culture and just any tips for people on tactics that you found that work particularly well in the area of change management? How do you get an organization to do things differently?
Jake McCoy (48:44):
Yeah, I think don't scare people off initially; create a safe world for people. If somebody thinks they're getting fired for trying a new tool, that doesn't promote a positive culture, right? So build a lab, empower some AI champions, get some work groups going, build some excitement. A top-down rollout of a tool like this doesn't work. What we're doing with AI is different from anything else we've experienced. It's not a new system; it's a new way of working. We have to get that into the minds of our leaders and our teams, and understand that ultimately we've got to get in there, have these conversations, and make sure that we're working collaboratively.
Jon Herstein (49:24):
Jake, it's been an incredible pleasure for me to have this conversation with you. I learned a lot, I hope our audience learned a lot, and we appreciate both your time on the podcast and your partnership with us. You've been a design partner with us, you've had early access to our products, and you've given great feedback that's helped make our products better for all of our customers. So thank you. Thank you.
Jake McCoy (49:43):
Thank you so much, Jon. This has been a pleasure, and thank you so much to everybody at Box.
Jon Herstein (49:47):
Take care. Thanks for tuning in to the AI First podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI-first era.