AI-First Podcast

AI is scaling human expertise like never before.

In this episode of the AI First Podcast, Box Chief Customer Officer Jon Herstein sits down with E.A. Rockett, Adobe's Vice President and Legal CTO, to explore how AI is empowering experts across industries by providing the structure and tools to scale their knowledge. They discuss Adobe's unique approach to AI innovation, including the powerful "A-through-F" framework for evaluating AI use cases and the vital role of ethics in creating a trusted AI foundation.

Key moments:
(00:00) The best prompter for AI is AI itself
(00:34) Introduction to the AI First Podcast with Jon Herstein and E.A. Rockett
(02:14) E.A. Rockett explains his role and the concept of being a legal CTO
(03:23) Motivation behind blending legal expertise with AI innovation at Adobe
(04:23) Accelerating iteration and scaling AI to personalize solutions
(05:32) Adobe’s posture on AI innovation and balancing risks
(06:23) The journey with Firefly and creating safe, licensed models for customers
(07:15) Introduction of the A through F framework for AI governance
(09:01) Internal process for submitting AI use cases for evaluation
(10:09) Creating an enabling, not restrictive, environment for AI use
(11:33) Balancing creativity during hackathons with governance in AI use
(11:56) The role of ethics in Adobe's AI framework
(13:06) Ethics being integrated throughout AI projects, not just at the end
(14:20) Discussing where Adobe draws the line on AI use cases
(16:24) AI’s role in retaining talent and keeping employees engaged
(18:18) Attracting A players by providing access to cutting-edge AI technology
(21:06) The GIFTS framework: scaling expert knowledge through AI
(23:23) Using AI to create expert-level solutions even for non-experts
(29:36) Emerging hybrid roles due to AI, especially in legal and compliance fields
(32:33) Challenges in training junior employees while using AI
(34:06) Overcoming resistance in launching AI projects at Adobe
(42:39) AI unlocking scale in legal and compliance fields
(44:22) Filtering through AI to focus on relevant information
(46:47) Addressing AI risk and AI’s role in supporting human creativity
(50:14) Advice for CIOs and legal leaders behind on AI readiness
(53:30) Closing thoughts on the future of AI and its intersection with legal and tech
(59:04) How Adobe measures success of AI initiatives based on user engagement

What is AI-First Podcast?

AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.

This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.

If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.

E.A. Rockett (00:00:00):
The best prompter for AI is AI itself. "Write me a prompt that will solve my problem." And then it does that. And then you say, "Run that prompt." This has worked effectively and efficiently. I know we used medicine or vaccines as a use case, but whether it be legal or finance, you go across marketing, you go across the back office, and creating this foundation now gives you a solid structure. Now, somebody who may not be an expert can start doing runtime stuff, asking that thing questions as if the expert were in the room.

Jon Herstein (00:00:34):
This is the AI First Podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence, and putting AI at the core of enterprise transformation. Hello everyone, and welcome to the AI First Podcast, where we dive deep into the future of enterprise transformation with the leaders who are making it real. I'm your host, Jon Herstein, Chief Customer Officer at Box. My guest today brings a unique perspective to the intersection of law, AI, innovation, and technology. E.A. Rockett, a vice president in Adobe's Office of the General Counsel, is also widely known as Adobe's legal CTO. From building expert-powered AI frameworks to redefining governance at scale, Rockett is helping Adobe navigate one of the most ambitious generative AI transformations in the industry. In today's conversation, we'll unpack how Adobe's legal team became a launchpad for ethical innovation, explore Rockett's safe and trusted model, and dig into the bold idea that AI doesn't replace experts.

(00:01:45):
It finally lets us scale them. This is a conversation about enabling trust, transforming legal culture, and getting enterprise AI right. And with that, let's get right into it. Rockett, a longtime friend of Box and a longtime friend of mine. You are a vice president in Adobe's Office of the General Counsel, as I mentioned, but you're known as a legal CTO, which is a bit of an unusual title. So can you share a bit more about your background, your role, and what it means to be a legal CTO?

E.A. Rockett (00:02:14):
Absolutely, Jon. And thank you for having me; I'm so thrilled to be here today. The legal CTO moniker essentially comes from ... I am a lawyer, but I was originally, and still am, a technologist. And in my role at Adobe, I handle legal technology, the enablement of legal technology, because legal technology is pervasive throughout an organization, whether it's Adobe or Box. In addition to that, I am part of the AI enablement team for Adobe at large. We call it AI at Adobe. And then lastly, I am on one of the customer zero teams for our Document Cloud business. So we're testing things way in advance, suggesting things to be part of the features in the product roadmap. So if you combine those three things, that's essentially where the moniker comes from.

Jon Herstein (00:02:58):
Got it. And this was a title that you had, I think, even before you were at Adobe. It's been sort of a calling card of yours for many years.

E.A. Rockett (00:03:05):
Absolutely. As I've gone from company to company, it's this kind of convergence of these different disciplines, if you will.

Jon Herstein (00:03:11):
Yeah. So what motivates you in this role where you're kind of combining your background as a lawyer, your legal expertise, but also your technology passion to really champion AI innovation at Adobe?

E.A. Rockett (00:03:23):
What really thrills me, Jon, and really gets me excited when I get up in the morning: we're at this special moment in time. I've said for years, and I'm sure you've said similar words, code beats paper. We're biased toward action. The issue is that to get to that action, to get to that code, to get to the things that made a meaningful difference, there was a lot of payload associated with that, right? Everything from program management to product management. A lot of key aspects of those things remain super important today, but the velocity has accelerated greatly. And I'm sure you can share similar stories. The longer the trajectory from the idea to what we want, the more it starts to get diluted along the way. So the idea, the thing you were trying to do, the problem you were trying to solve, the way you were trying to solve it: the quicker you can solve it, get that first iteration, and then of course fail forward the way we often talk about, that is key.

(00:04:23):
So I'm excited because it's the first time that we're seeing the velocity of iteration that we always wanted, and that velocity of iteration is also allowing personalization: personalization at scale. There's some lawyer who does antitrust? I can solution for you. There's someone on the sales side? I can solution for you. So it's both that velocity as well as all the scale associated with it.

Jon Herstein (00:04:48):
So you feel like in your role now with the pace of innovation, you can bring solutions to your business much quicker than you've ever been able to in the past?

E.A. Rockett (00:04:56):
At phenomenal speeds. I often mention that we've been waiting for this day, where there's a very short turnaround from the light bulb moment of, oh, I fully understand the problem and I know what solution will solve it, to actually having that solution.

Jon Herstein (00:05:11):
Well, it's an exciting time. And clearly Adobe is one of those companies that is at the forefront of generative AI with tools like Firefly as one specific example. So I am curious, we'll talk not a huge amount about this, but I'm curious from your legal perspective, how would you describe Adobe's overall posture on AI innovation and how you balance that with risks?

E.A. Rockett (00:05:32):
Absolutely. So I'll go through what we do internally in our AI at Adobe committee, or work stream as they call it, which is where everything comes through for internal enablement. But even before I get to the AI at Adobe piece of it: being there for this journey, which began with Firefly. I mean, we have many more AI solutions now, but our first entry point was Firefly, our generative AI model for images. It now does more than images: it does video, it does audio, it does much more. But being along for that ride, I can't tell you, Jon, how blessed and thrilled I feel, because you're just kind of sitting there doing your day-to-day in what I'll now call the old paradigm. And then we have this generative AI paradigm, which is already an amazing experience to go through. And then the company I was sitting at developed its own foundational model.

(00:06:23):
And in that, we dealt very early with some of the questions like, how was the model being trained? And in our case, we trained off of licensed content, which gave our customers the security that our solution was commercially safe. And then that just quickly rolled into, and then we can indemnify you. It really gave us a moment not just to deliver technology; it gave us a new way to think about what safety really means. In our case, we call it commercially safe. What does commercial safety really mean? And if it is safe, then I can indemnify it. Similarly, after the first Firefly models were out there, we wanted to enable our entire enterprise to use AI for all kinds of things: our own home-cooked AI plus other third-party AI. It's technology. We should use technology, whether it's home-cooked or cooked elsewhere, to do great things for us.

(00:07:15):
And we were confronted with the same question, Box. You probably confronted it there, and many other enterprises did too. How do I enable this full plethora, this portfolio if you will, of technologies quickly, but safely? And so in this AI at Adobe initiative, we cooked up an A-through-F framework. I'll share it with you, Jon. Other companies, other people, will describe it differently, but I think the core fundamentals, the core elements, are essentially the same for everybody. And we call it A through F, so there are six letters. A is: which team? B is: what technology, what foundational model? C: what data are you going to input into your use case? And I know we're going to double-click into that, what you're putting into your use case, but that's C, what data you're putting in. D: what are you expecting to come out? E: who's your audience?

(00:08:10):
Who are you going to give it to? Is this internal? Are you giving it to your customers? Are you going to put it on the website? And then F: what is really your overall objective? And we use those six letters, or six levers if you will, to really explore each use case. And here is really the unlock, at least as I've appreciated it. It's not about good use cases, bad use cases, risky use cases, not-risky use cases. You're taking those six elements, and again, different companies can describe them in different ways, but you're dialing it in. For this use case, well, don't use this technology, use this other technology. Well, for your input data, can you use non-confidential information? If you dial those six levers, or dials if you will, you really can quickly enable all kinds of teams to move fast, get their AI IQ up, and really be productive.
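
For listeners who think in code, the six questions can be sketched as a simple intake record with a toy routing rule. This is purely illustrative: the field names, the example values, and the fast-track condition are assumptions for the sketch, not Adobe's actual portal schema.

```python
# Illustrative sketch of an A-through-F use-case intake. All names and the
# triage rule below are assumptions, not Adobe's real process.
from dataclasses import dataclass

@dataclass
class UseCase:
    team: str             # A: which team is asking
    technology: str       # B: which technology / foundational model
    input_data: str       # C: what data goes in ("public", "confidential", ...)
    expected_output: str  # D: what you expect to come out
    audience: str         # E: who receives it ("internal", "customers", "website")
    objective: str        # F: the overall objective

def triage(uc: UseCase) -> str:
    """Toy routing rule: internal experiments on non-confidential data can be
    fast-tracked; anything else gets the full review described in the episode."""
    if uc.audience == "internal" and uc.input_data != "confidential":
        return "fast-track"
    return "full-review"

print(triage(UseCase("legal", "some-llm", "public", "summary", "internal", "research")))
print(triage(UseCase("marketing", "some-llm", "confidential", "copy", "website", "production")))
```

The point of the six "levers" is visible even in this toy: a rejected case is not a "no," it is a case where one dial (audience, input data) can be turned to something approvable.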

Jon Herstein (00:09:01):
So is the process then that someone in the business or someone in IT says, "Hey, I have an idea for a use case that I think could be powered by AI." And they come to you and the governance committee with the A through F sort of framework filled out for analysis review and maybe some recommendations back?

E.A. Rockett (00:09:17):
Exactly. So we have 30,000-and-change employees, and all 30,000 have access to the portal, and it asks a bunch of questions, similar to a DMV application. They don't necessarily realize they're answering A through F. And then it just goes through our process. And we have fast track: you're just experimenting, it's just internal, it's just for research, all the way to, no, I want to use this in something runtime, something for production. The use cases have different attributes for each element.

Jon Herstein (00:09:45):
That's really interesting. We've had a lot of conversations with other leaders about how they've approached governance, and whether it just becomes a place where you've got a group saying no to everything and making it really hard, or whether it's actually seen as more of an enabler. It sounds like what you're trying to do is understand what the business is trying to accomplish and then help guide them and put guardrails around it, as opposed to saying, "No, you can't do that."

E.A. Rockett (00:10:09):
" A hundred percent, John. And then, I mean, we've been doing it, I can't remember how many years now, because it feels like for a long time, but the AI is still relatively short. We're starting to develop patterns. So we're like, "Your use case matches this pattern which we approved ready or good." So that's also enabling our velocity for

Jon Herstein (00:10:26):
Enabling. You mentioned 30,000 employees, but what's the sort of volume of these forms that come through for the framework scoring?

E.A. Rockett (00:10:34):
Here's the answer that I think most tech companies, software companies in particular, would appreciate. We have hackathons and garage weeks, and those run for a few weeks. I mean, people who are part of this committee have other jobs, but the volume when it's not a garage week or a hackathon is relatively reasonable. Then you hit those garage weeks, you hit those hackathons, where people want to use AI in those-

Jon Herstein (00:11:00):
More ideating and ...

E.A. Rockett (00:11:02):
Yes. And then you just see it rise up. And that really is our challenge, because when we get to those moments, these hackathons, these garage weeks, that's what we really need to enable, because everybody's in that ideation mode. So that really is where we're challenged to make sure that we do that proper balancing and that proper dialing, so that we can get the benefit of those exercises. And that's one of the reasons people come to companies like Adobe, companies like Box: because they want to participate in such activities.

Jon Herstein (00:11:33):
Okay. That is fascinating. And I think that's something that most of our listeners could learn from is taking that very structured approach. And I think many, many companies are on that path of developing a framework for their governance, but I think you're pretty advanced. Now, we didn't talk about ethics, but you talk about ethics a lot. And so I'm sort of wondering, where do the ethics of AI come into play when you're evaluating through the framework?

E.A. Rockett (00:11:56):
The way that ethics plays in, and I think this is rooted in how we began with our Firefly model being commercially safe: we've always treated ethics as super important. We know ethics can be the difference between people using something or not using something, being comfortable with something or not being comfortable with something. Ethics is basically woven through our A-through-F processes, woven through our Firefly model and the other models we've put out there. We view ethics, and the appropriate use and appropriate governance of AI, as a super positive thing, because we've all been there before: you have some window open, pick your foundational model of choice, and it may not be so much that what it's giving you back is jarring you, but it's just way off. So ethics goes all the way from things being way, way jarring, all the way to things being tuned to tone and your point of view and things like this.

(00:12:52):
So when you do that, I think the user is just way happier. So ethics not being as a laststop on the bus, but as being a continual thread all the way from experimentation to use and productivity.

Jon Herstein (00:13:06):
I mean, you sort of use the term ethics must come before procedures. And that's sort of what you're saying here is that it's not like go through the whole process and at the very end go, oh wait, is this ethical? It's like, no, no, no. Build it in from the very beginning before you even go through that process. That's what you're describing.

E.A. Rockett (00:13:21):
100%, Jon. And I almost think about it, if we put technology to the side: if two people are going to have a conversation about a topic, let's just say two people in conversation with each other. Well, think about, I'm going to have a meeting with Jon about this. How am I going to approach Jon? All of that bedside manner, if you will, of how humans engage with each other, that's like a subcategory of ethics. It's about behavior. It's about how one engages, about inputs, about outputs, and we're merely translating that, and it is equally applicable to technology. So a hundred percent, Jon, that is way, way upstream and definitely not downstream.

Jon Herstein (00:13:59):
This isn't exactly a question about ethics, but it's a little bit maybe more about culture, but you've talked about the framework for reviewing and looking at AI solutions, but do you also have a very clear stance on where Adobe won't use AI internally where things you say, "Well, that's a thing that we only ever want humans to create or to define," or is it everything sort of open for discussion?

E.A. Rockett (00:14:20):
It depends less on the AI. But in our A-through-F framework: E, who's your audience, what are you going to do with it? And F, what's your objective? Because let's say I'm in some sensitive group and I'm using AI to get back information whose output is sensitive. Am I going to ingest that, including saying, "I agree, I disagree," all of these things, before I email it to you, Jon? It really is a combination, quite frankly, of human and machine. Actually, that's why I have that helmet sitting back there, Jon. I'm a big Formula One fan, for the F1 fans out there. I do go-karting myself. But the reason I like Formula One, Jon, is because people don't win those races, Lewis Hamilton, Max Verstappen, they don't win those races because they're only a great driver, nor do they win those races because it's only a great car.

(00:15:15):
It's the combination of the two. So in response to your question, since this is the same human plus machine, we have not necessarily said no, no, no, in these areas. Rather, we've said human plus machine.

Jon Herstein (00:15:30):
Got it. Okay, that makes perfect sense. So it sounds like it's going to be a bit case by case, but your general assumption is that it's going to be some combination of technology plus people, technology that may or may not include AI. It'll just depend on what the right answer is for each situation. Now, a lot of companies that we've talked to, at least in the early days of AI, have really been focusing on AI primarily as something that's going to help save costs, drive efficiency, make things more operationally effective. But you've also pointed out that it's a way to retain talent, which I think is really interesting, because everyone's worried about AI replacing talent. Can you explain a little bit about what you mean? How is AI going to help you retain your A players? What's the connection between employee loyalty and companies being open to bringing AI into the environment?

E.A. Rockett (00:16:24):
100%. I'm always very candid, so here's my candid position. When I looked across the landscape, I heard people talking and I saw the articles on LinkedIn about AI for productivity and AI for efficiency. Sure. Any technology can help with efficiency, with productivity, with cost optimization. But it kind of surprised me a bit, because my natural, genuine response was more like when a new chip comes out and your computer runs faster, or a new phone comes out that has more megapixels in the camera or something. In the technology space, and I know for a fact you're in the same group with me, Jon, we love when new stuff comes out. We can't wait to play with it. When we play with it, will efficiencies come? Sure. Will productivity come? Sure. Will cost optimization come? Sure. After we figure out what we're going to do with it. But we first play with it.

(00:17:19):
And I love being here at Adobe because they let me play with technology. I think playing with technology is an employee benefit, just as people pick where they work based upon the health insurance plan, the dental plan. We went further in Silicon Valley: we put arcade games and pinball machines in the offices, and we did all kinds of perks so that people would have fun at work and want to work more and be comfortable. And on the compute side, when people joined our organization, we'd say, "What computer do you want? We'll get you the fastest one you want." This is part of our culture: to let people play with these shiny objects and see what they can do with them. So to get to the point, I fundamentally believe, yes, efficiency, productivity, cost optimization, that just automatically comes. The number one thing for me is that when you're attracting A players, A players want to be at a place where one of their benefits is that they get to play with cutting-edge technology.

Jon Herstein (00:18:18):
If you've got a technology environment where you're saying no to that stuff and really restricting who can access it, you're saying that will have a downstream impact on people deciding, "Is this really the kind of place I want to work or not?"

E.A. Rockett (00:18:29):
A hundred percent. I mean, hopefully this increases where we all go out to have a bite with buddies after work, whether it be buddies at Adobe or buddies at other companies. Right now, when we all sit down, respecting confidential information of course, we're sharing like, "Hey, I just tried this, and here's what it did. And I just tried this, and here's what this did." What I love about your platform, Jon, is you can switch between foundational models, and that's people just trying to see what these things do. Raise our IQ. Imagine if you went to that sit-down, grabbed some tapas with some friends, and everybody's talking about the cool things all these new models do, and you can't participate in the conversation because where you work, you can't touch it.

Jon Herstein (00:19:13):
There was a former leader at Box who used to have this expression, and I'm sure he still uses it: show me your technology stack and I'll describe your culture. And I think that's exactly what you're getting at, which is if you have a technology stack that's old, outdated, not modern, and doesn't take advantage of these new capabilities, you're likewise going to have people who are comfortable in that environment, versus one where they're innovating and driving change and so forth. So I think it's even more true today than it's ever been.

E.A. Rockett (00:19:38):
3,000%. Yes, and: when you can make your A players happy, they stay, like you were suggesting before. That's great for retention, and they're more productive because work doesn't feel fatiguing. Conversely, if an A player leaves, whatever cost optimizations, efficiency, productivity you thought you could scrape up from any technology is nothing compared to the value of an A player walking out the door. I've seen A players in various industries walk out the door with that institutional knowledge, and the, I'll just call it a loss because it's more than an economic loss, it's knowledge loss, all those things. It's tremendous: to the group, to the organization, to the enterprise. So I place high value on retention, definitely retention of A players. Make those A players happy, reduce that work fatigue, increase that work joy. I think there's a recipe there.

Jon Herstein (00:20:33):
Yeah. So I want to talk a little bit more about this idea of empowering the experts. You talked about the A players, I mean the people who have a lot of that institutional knowledge, a lot of knowledge in their domain, the work that they do and so forth. And I'm wondering how you're thinking about AI really empowering those folks and allowing the company to leverage all of that knowledge. You have this concept of the GIFTS framework, and I'm wondering if you could describe what that is, because I don't think it's common knowledge, and how it helps Adobe deliver that idea of expert power at scale.

E.A. Rockett (00:21:06):
Thank you, thank you. Thank you for even asking that question. I love talking about this. I made this up. It's not an Adobe thing. It's a Rockett thing. I call it the GIFTS framework. GIFTS stands for generative intelligence, foundation, task sessions. As I describe it, I think each one of those words will come to life, including "generative intelligence." So much is in the AI, but this is really the combination of human and machine. And we know that since compute came to be, it was always human and machine. How does human plus machine translate in this AI atmosphere, this AI world, into enabling great AI solutions? So for this GIFTS framework, essentially, think of two things. There's this foundation over here, and there's what you're doing at runtime over here. On the left is the foundation. On the right is what you're doing at runtime. Let me talk about the foundation.

(00:22:01):
And this foundation is where you leverage your experts. And we can finally, Jon, get scale. We talk about scale all the time, but do you see true scale? How do you get true scale? So here's a foundation where you get true scale; it has five layers. Layer one is the foundational model that's best for whatever task you're trying to do. It could be a particular LLM, it could be a particular diffusion model. In Adobe's context, you can now pick from a variety of models, including Adobe's native models and others that are out there; we announced that most recently. So at the base, layer one is the foundational model. And what's happening there? Generative AI has used probabilistic technology to gain some general knowledge. That general knowledge could be speaking English, speaking French, speaking Spanish. In a language context, layer one learned to speak English. Check.

(00:22:55):
Then you go, give me the vaccine formulation for COVID, let's say. And it goes, how am I going to answer that? It doesn't have enough information. So it needs layer two. Layer two is essentially subject-matter-specific knowledge, also known as

Jon Herstein (00:23:14):
And sometimes that knowledge is proprietary, and that's an important element of this as well. It's not general knowledge; it may be proprietary to your business, your domain.

E.A. Rockett (00:23:23):
I'm glad you mentioned that. That layer two knowledge, that's the main reason it's usually not in layer one: it's proprietary. There may be trade secrets in there. Even the curation of layer two content is a craft in itself, because there could be conflicting things in layer two. So layer two, the selection of what goes into layer two, could be confidential information, trade secrets, a whole bunch of different sensitive things which are not part of a foundational model. So if you're a pharmaceutical company, you've loaded a bunch of the formulas for vaccines that worked. That's your layer two. So now it's like, oh, I can speak English and I know something about the formulation of vaccines. And remember, layer one is probabilistic general knowledge. It took general text, reverse-engineered it, and learned English. Layer two, it took the formulations of these vaccines and, with that probabilistic technology, reverse-engineered them.

(00:24:16):
Oh, okay. I see how these things are constructed, what they comprise. And then you say, give me the formula for the COVID vaccine. And it draws a blank. And you're like, no, that doesn't work, because you're missing three more layers in my GIFTS framework. Again: layer one, general knowledge; layer two, specifics. Layers three and four are deterministic. Three and four are where the human comes in with the machine. You sit down with your expert and you go, "Hey, you're an expert scientist who does vaccines. Give me all the ways that you approach formulating a vaccine." That's layer three: the instructions for how an expert would approach it. Layer four is all the things you would never do. You would never do them because you tried them and you know why they don't work, or because you tried them and they don't work even though you don't know why.

(00:25:00):
Or others have told you in publications that they don't work. And then lastly, layer five. You have four layers of information: general knowledge, layer one; specific knowledge, layer two; what an expert would do, layer three; what an expert would not do, layer four. I could sit there and write a prompt that's going to make this work effectively. But the best prompter for AI is AI itself, and I know there are articles all over the internet, particularly LinkedIn, about this. In layer five, you tell the AI, "This is what I'm trying to do. These are the layers you have accessible to you (you already know layer one, because you are layer one). Write me a prompt that will solve my problem." And then it does that. And then you say, run that prompt. This has worked effectively and efficiently. I know we used medicine or vaccines as a use case, but whether it be legal or finance, you go across marketing, you go across the back office, and creating this foundation now gives you a solid structure.

(00:25:59):
So that was over here; the foundation was on the left. Now, somebody who may not be an expert can start doing runtime stuff, asking that thing questions as if the expert were in the room. Let's say I'm not an antitrust lawyer, but I have questions about antitrust. Or I'm not an expert in finance, but I have some questions about P&L, I have some questions about these things. So the non-expert can engage, and the expert can as well. The experts and the non-experts can engage with this foundation, this five-layered foundation, for very effective, very consistent, very trustworthy use of AI to solution for them.
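
The five layers Rockett walks through can be sketched as a prompt-assembly step. This is a minimal, hypothetical sketch: layers two through four are supplied as text, and layer five asks the model itself to write the final prompt (layer one is whichever foundational model receives it). The function name and layer labels are illustrative assumptions, not anything Adobe has published.

```python
# Illustrative sketch of "layer 5" meta-prompting in the GIFTS framework:
# hand the model layers 2-4 as context and ask it to write the working prompt.
def build_meta_prompt(task: str, domain_knowledge: list[str],
                      expert_dos: list[str], expert_donts: list[str]) -> str:
    """Assemble the layer-5 request. Layer 1 is the model itself, so it is
    not passed in; layers 2-4 are the expert-curated context."""
    sections = [
        "You are helping with this task: " + task,
        "Layer 2 - domain knowledge available to you:",
        *("- " + item for item in domain_knowledge),
        "Layer 3 - how an expert approaches this:",
        *("- " + item for item in expert_dos),
        "Layer 4 - what an expert would never do:",
        *("- " + item for item in expert_donts),
        "Write me a prompt that will solve my problem.",
    ]
    return "\n".join(sections)

meta = build_meta_prompt(
    "answer antitrust questions for non-lawyers",
    ["curated antitrust memos from the repository"],
    ["ground every answer in the cited memos"],
    ["never speculate beyond the provided sources"],
)
print(meta)
# You would send `meta` to your foundational model of choice,
# then run the prompt it writes back ("run that prompt").
```

Nothing here is model-specific; the sketch only shows why curated, tagged content (the "layer 2" repository Rockett mentions next) is the fuel for the whole exercise.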

Jon Herstein (00:26:37):
And that whole framework then allows you to take the expertise that you have in an organization, scale it out, and make it valuable for people who aren't the experts.

E.A. Rockett (00:26:46):
So Jon, I love what you said. And where that takes me, and this brings me back to when we first met: we always knew, you always knew, I guess I learned this from you, that if you could get your data in one place, your data, your content, and you could curate it and you could tag it, great things would result. So we had traditional ways of leveraging that, for sure. And those were great ways: deterministic compute ways, querying a bunch of things. We're at a stage right now where that is the fuel. That is the fuel for this generative AI, because that layer two, those documents, that content, that data, that's the curated stuff. And quite frankly, as the experts are trying to figure out what they need to put in layers three and four, because they just do it, they never wrote it down, let's say, right?

(00:27:42):
They can also leverage that same repository to get the right content into layer three and layer four. And the interesting thing is, if I were to turn to somebody and say, "Okay, here's my framework and there are five layers. Layer one is done; that's the foundational model. Give me layers two, three, and four," they're like, "Whoa, dude, that's a lot." But what if I said to them, "Here's a whole curated repo. It's in your head too, but it's mostly in the repo, and you'll be able to fill out layer two, layer three, and layer four." And that is why we are finally beginning to expand even further (we were getting great results before) the importance and the value of curating and tagging our

Jon Herstein (00:28:28):
Content. And what's sort of crazy about this is, as you said, some of it's in people's heads, but a lot of it is literally in documents and systems that are inside of an enterprise today. So it's not like you've got to go find it. I mean, you do have to go find it a bit, but you can find it at least within your own four walls. There's a bit of work there. There's the tagging and all that. But what's cool now is that that can all also be done automatically. Whereas before you would've said, "Hey, I've got all this old content. I need to understand what's in it. I need to have a team of people go through it one document at a time and tag it manually, and it's probably going to be wrong. It's probably going to be out of date as soon as it's done." You can actually automate that whole thing.

(00:29:06):
And now you've got a corpus of knowledge that's actually

E.A. Rockett (00:29:09):
Valuable. You now have a corpus of knowledge. We should put that on T-shirts and hats and stuff.

Jon Herstein (00:29:14):
That's what's going to be on the hat. That's

E.A. Rockett (00:29:16):
What we're going to do on the hat. "You've got a corpus of knowledge," or something like that. We'll figure it out.

Jon Herstein (00:29:20):
What does all this mean in terms of roles when you think about not just on the technology side, but also on the legal side and think about things like compliance and ethics and all these things, what are the sort of new hybrid roles that you've already seen emerge or you suspect may emerge as a result of all this?

E.A. Rockett (00:29:36):
Maybe I'm an old-timer mentally or something like that, but I think it is as it was, which I'm thrilled about, which is this: since the beginning of civilization, humans have always respected experts and expertise. The person who could do horseshoes, the person who could make swords back in the days of King Arthur. Expertise has been revered throughout the course of time, through various cultures, throughout humankind. And along the way, we needed people to help the experts. The way it used to be, the people who helped the experts were apprentices, and they were learning the mastery of the craft. So you wound up getting more experts. The expert needed assistance, but the assistants were learning to be the expert. You wound up getting more experts, more people who could do horseshoes in the village, more people who could forge swords.

(00:30:26):
Somehow we got away from that. And the assistants were assisting. The human assistants were assisting, but they weren't on a trajectory to become an expert. And in your corporate context, they sit down with you four times a year, or once a year, and go, "What's my development path?" I think generally in technology culture, we want everybody to be an expert in something, and something they're into, because then they really thrive and they really do amazing things. I think the change is going to be this: as I'm saying, it is as it was. We're going to go back to that assistant track. That is an effective method, whether somebody's going to be a forger of swords or a lawyer or what have you. That's still the effective path to train somebody: training somebody into the art, training somebody into expertise, and where they can take it to the next level.

(00:31:22):
Now, can we please use this technology to take the weight off of some of those tasks, so that when the people are assisting, they have AI as an assistant? So they can do their assisting, but focus on their other task, which is to continue their journey, if you will, to becoming an expert in something that they're into. Yeah,

Jon Herstein (00:31:42):
Because even in that sort of apprentice role, there may still be a lot of tasks that you learn how to do fairly quickly. There's repetition, but it's just work that needs to be done. It may not be helping you master the craft at that point. And I think this is one of the concerns I've heard raised from leaders: well, if AI's going to do a lot of that grunt work of the low-level person in whatever role it is, the first-year lawyer, the first-year accountant, whatever it is, how are those people actually going to develop the expertise so that 20 years from now, they are the senior partner in the law firm, they are the general counsel, and they've got all that experience? How do you replace or find a way for people to get that same expertise if you're sort of outsourcing some of that lower-level work to AI?

(00:32:27):
I don't know the answer to that. I don't know if you've thought a lot about that question, but I think it's going to be an interesting challenge.

E.A. Rockett (00:32:33):
I love your question, Jon, and there are going to be a variety of questions like those; I know you and I chat about questions like these all the time. For that one, I mean, it's just a guess, right? I think it's kind of like Karate Kid. Remember, he was painting the fence, wax on, wax off. I think it's going to be a number of iterations, as these apprentices, as these assistants, are going from the various levels of their skill to expert, that we're going to want them to do so that they can understand. It's about an understanding. It's about an experience. It's about those things that you pick up when you do something. And then at one point you're like, "Okay, I know what that is. Now AI, the machine, can do it." But that's just a guess.

Jon Herstein (00:33:16):
I think you're right. And it's probably going to vary by profession, by organization, et cetera, but everyone's thinking through this: how do you find the right set of things so you can still build skills in people while also taking advantage of these AI capabilities? So let's talk a little bit, maybe more than a little bit, about challenges and lessons learned. We talked a lot about philosophy and frameworks and so forth, but let's get into a little bit of the nitty-gritty. And I'm sort of curious about a few things. When you first started launching AI initiatives at Adobe, and we'll talk about AI specifically, did you run into resistance, analysis paralysis, a fear of moving forward on things? And if so, how did you help drive through that? How did the other folks who were innovating on AI drive through it? And was the A-through-F model helpful in any of that?

(00:34:06):
How did you get beyond maybe initial friction in rolling some of these things out?

E.A. Rockett (00:34:11):
In my case at Adobe, and a lot of this probably is because we had our own foundational model, Firefly, it was much more biased toward: let's get a machine that's working, and working quickly. Let's figure out what are the risks that we care about. Because, I mean, if you sit down with somebody in compliance, privacy, you name it, they'll give you a long laundry list of risks. Which of those risks pertain to you, or which you care about, is a different story. So we were really biased towards velocity, biased towards which risks truly concerned us. And I think every organization just truly has to go through that exercise. But I will tell you one of our biggest unlocks, and I've spoken to many CIOs, CTOs, COOs, very interesting discussions on how each company approaches this: from my perspective, you need a hard wall between experimentation and going production-live with something.

(00:35:09):
Experimentation, production. And the reason is, people over here in experimentation are just trying to figure out what the thing does. I've talked to people in other companies who are in experimentation mode, and they're being asked about KPIs. We're in a technology sector; there is an aspect of our jobs that's like forecasting: belief in the future, belief in what will come. And there's kind of a wait-for-it thing. If we let the troops experiment and let the experts experiment, the things that come out of it are going to be amazing, because you don't experiment going, "I'm going to solve this problem." In the experiment, you get to know the capabilities, you know all these Lego pieces and you know how to do it. And then you're in another meeting, completely not related, and you hear the problem, and you're like, "Oh, I know, based upon my experimentation, how to solve that.

(00:36:06):
" Now you can give me KPIs and all that.

Jon Herstein (00:36:09):
Sure. But you're just saying, be very clear and explicit, I think, that these sets of things we're doing over here are experiments. Nothing may come of them. There may be no return. There are no KPIs. And you probably need to do that in a limited way, because you can't just throw unlimited resources at it. And there's another set of things where it's like, we've committed to doing a project or an initiative or something, and we're doing that, and it's going to take advantage of these capabilities, but that's a different thing, because there will be expectations around a completion date and outcomes and so forth. That's a different category of activity.

E.A. Rockett (00:36:42):
One of the things I'm into, which crosses over with technology, is songwriting, Jon. When I'm songwriting, this thing that I'm into, where I do use technology and digital audio workstations and all this stuff, I know every song I write is not going to be a hit record. Quite frankly, if 10% of the songs that I write were hit records, I'd be winning Grammys left and right. I mean, I think that we have forgotten how much iteration it takes to get hit records. But to your point, Jon, you ring-fence how much time, how much effort we're putting into writing new songs versus pushing out the songs we wrote before, or pressing vinyl, or whatever you're doing. So I think each enterprise has to define its resources for each category, but then set expectations accordingly: productized, production-ready stuff gets KPIs; experimentation is more like the hit-record category, where you're hopefully going to get high-single-digit effectiveness.

(00:37:44):
But if people are writing hit records, you're like, "Jon, I know you're down in the garage writing records. We need 50% of them to be hit records." You'd be like, "Dude,

Jon Herstein (00:37:52):
50%." Are there examples, anything you can share, around an AI use case where maybe after some experimentation you all decided, we're not going to pursue that, either because it didn't work, or it wasn't that helpful, or, I don't know, maybe it just wasn't a good idea? AI,

E.A. Rockett (00:38:07):
Although it's probabilistic, not deterministic like the compute we've been using for many years, it's still just software, or software plus content. So what has happened more times than I can count is: the way that we thought we were going to get there, that didn't work. The foundation model we thought we were going to use for layer one, that one wasn't a good one. Artifacts that we were going to put in from our curated knowledge of everything, we put the wrong ones in. A lot of iteration, a lot of course correction along the way. And just in the spirit of candor, and just how technology works, we often go back and look at what the solution actually turned out to be versus where we began. So it really is just a failing-forward thing all over again.

Jon Herstein (00:38:54):
Right. And we're very good at that in Silicon Valley, right? Exactly. Fail fast, learn from it, and move forward from there. Yeah. I would love to get your perspective on something that you may be uniquely qualified to talk about, which is: everyone's got concerns, let's say, around compliance and what it means to be using AI, and so on and so forth. And as of this moment, anyway, there is not a lot of external regulation. We tend to not be in regulated industries where we sit. But how are you thinking about, and navigating, and helping Adobe navigate, in your role inside the Office of the General Counsel, compliance where you don't have a lot of external regulation or a lot of external guidance on what you can and can't do?

E.A. Rockett (00:39:35):
This is going to sound like a highly non-technical and non-scientific answer: we are really grounded in trying to do the right thing. It's interesting because, at least for me, when you see these regs come out, they may be written as very long, big bodies of text, but if you take a step back, they're like, "Oh, they don't want people to do this because that injures people," or that could harm people, or somebody's personal health information, say the whole health privacy thing: Rockett's personal health information being out there is not cool for Rockett. Not that we have some magic wand, or that I have a magic wand, to foresee what regulations could be, but I think we get really close when we don't just look at regs and go, "What does the reg say? I'm compliant, I'm good, and I don't know what's going to come." That part's important, don't get me wrong.

(00:40:33):
I'll talk about our two companies here: both have great cultures, have great communities of clients, have clients that trust us. That trust, yes, we give them software they can trust, but that trust is also, even more so, standing on the shoulders of how we behave. So if we go back to our first principles of how we behave, which is the first reason they came to us to begin with, that gives us tremendous foresight into doing things not because a reg existed, but because we knew it was the right thing to do. And if a reg comes, or doesn't come, to address it, we're sitting in a good situation.

Jon Herstein (00:41:08):
I was just thinking that: if you begin with that good intent, the right ethics (we have a value at Box called Make Mom Proud), you just think, okay, let's ground it in that, because that's the company culture, and it happens that there's no written regulation. I love your point that in the future, if there is a regulation, there's a pretty good chance you're going to be compliant with it because you've done the right thing from day one. Well, the details may matter, right? Exactly how you did it, and how you crossed the Ts and dotted the Is. But on the intent and the spirit, you'll be in very good shape if you handle it that way.

E.A. Rockett (00:41:39):
100%. I mean, we have a very sharp lens. We have various different customer segments; I'll name two. We have Adobe Max, where there's a bunch of creatives who use Creative Cloud, right? We're completely tuned into them, and that's at Adobe Max. And at Adobe Summit, we're tuned into all of our people who are doing marketing as a day-to-day job, right? Just being in tune with that audience. And I think you just said it: make mom proud. Make mom proud.

Jon Herstein (00:42:01):
All right. So let's turn our attention to the future. I'll ask you to pull out your crystal ball a little bit here and ask you for a few thoughts. And if we can, let's sprinkle in some advice for peers, because you all are pretty far along; you've thought a lot about this stuff, and not everyone's quite at the same place. So anything in here that you think would be useful for others who are maybe a little bit earlier in the journey. But let me start with this: what AI developments do you think are going to unlock scale, in particular for the areas of legal and compliance, in, let's say, two, three, maybe five years? I know five years is an eternity right now. What do you think is happening in AI, or likely to happen in AI, that's really going to unlock scale in legal and compliance?

E.A. Rockett (00:42:39):
I think in legal in particular, the issue has always been volumes and volumes and volumes of text, whether it's contracts, whether it's company policies, whether it's case law, whether it's laws, whether it's the congressional minutes that created the law and give you an idea of why the law was created. Just volumes and volumes of text, very difficult for not just one human but even teams of humans to consume. And then after they've consumed all that, they have to try to figure out, what does that mean? I think AI is going to accelerate all that. Because what we wanted to get to was a discussion over what it means. What does it mean for our company? What does it mean for your company? What does it mean for our customers? But there was this huge payload, because quantitatively it's just too much. And you would just throw humans at it: okay, if a bunch of people read all this stuff, then we can mine these volumes of content and basically pick out what's pertinent to what we're thinking about.

(00:43:45):
It just zones us in. And I'll go back to something I mentioned before: I'm heavily into avoiding intellectual fatigue. So the more that my experts, and the apprentices along for the ride to become experts, can get at the issue at hand, the better the result is likely to be, versus them drifting or what have you because they're just trying to deal with this container of information that's too much. So I really think that is the huge unlock in the legal space, and many other spaces, where by nature, for the right reasons, step one is just a lot, too much to consume.

Jon Herstein (00:44:22):
So do you see AI as being a tool in our tool belts to help narrow that down, to help us maybe not completely solve the problem, but say, "Look, of all the stuff that's out there, here's the stuff that's really relevant, germane to the thing you're trying to solve. Focus on this and make the right call based on that." Is that kind of how you think about it?

E.A. Rockett (00:44:43):
Absolutely. And I'll "yes, and" you as follows. We use a lot of words in culture, and even in corporations, about focus, about being strategic, all these words. For me, it means apply the subtractive method. Because if you have all these things you can do, but you subtract, then I can focus on this. You have all these things you can do, but I'm focused on this; I can be strategic about this. So a subtractive-method approach, I think, is highly effective. We see, when we apply that method, how well it works. It also becomes a filter, because it's like, "Hey, what are you doing? That's outside of the filter. We said we used the subtractive method and we're here. You're over here. Get back in here." So yes-and to everything you're saying: it starts to make some of the other things we're trying to do better, our focus, our strategies, our execution, all of that.

Jon Herstein (00:45:31):
Back to the area of advice, what do you see other ... You said you speak to a lot of peers in this space. What do you see folks getting wrong about how they think about AI and risk specifically? Too conservative, not conservative enough, something completely orthogonal to that. What are people not getting right? To me, it's something

E.A. Rockett (00:45:49):
Orthogonal. And the numbers are kind of weird, but for the vast majority of people that I speak to, there are two things that I believe they've missed, though my experience is only my experience, one human on Earth. Number one: humans, before we wrote and were able to mass-produce things and publish them, whether digital or physical, we spoke to one another. We communicated. Speech is one of the original human methods of conveying information. AI has made speech-to-text really good. Some people prefer to go to a keyboard and type their stuff. That's cool; everybody should choose their route. But there's a good population for whom these thoughts are going through their head, and the keyboard is actually slowing them down. I'm one of them, by the way. If I just take out (pick your tool of choice) and I just start rattling off what is in my mind, a couple of magical things happen.

(00:46:47):
Number one, it actually does great transcription now, because it's predictive technology; it can figure out what the heck you were trying to say even if you pronounce it weird. Number two, you just did a dump of everything that's in your head. And number three, most of us don't talk illogically. So when you're talking, it's probably chronological, it's probably in a good sequence, it probably builds upon itself, it probably makes a whole lot of sense. So number one is this transcription, this voice-to-text. As humans, we like to talk. I mean, look at the YouTube and podcast universe; look at what we're doing right now. We like to talk. It's a good way to learn and to convey to one another. I think, number one, we should use that more. It's darn good at it. The second thing: a lot of people I talk to are like, "Well, I told AI what I wanted and then, give me what I want.

(00:47:38):
" I'm like, "You think AI is tip of tree?" And this goes back to the experts. This goes back to learning your craft. This goes back to the GIFTS thing we spoke about. I've tried it both ways. I'll ask AI to do something with no guidance of what's in my head. It may not be wrong; it's just not something I would send or use, because it's not my point of view. It's not how I would even approach the problem. Conversely, back to point one, I basically just regurgitate everything that's in my head, in whatever manner, with all my mispronunciations, what have you, and say, "Hey, clean that up and make it concise." And I read that thing and I'm like, wow. So that was my first point: transcription is a valuable tool for those who are into that. But number two, it's almost like a sandwich.

(00:48:25):
The human begins it, AI comes in, and the human ends it, because it's human plus machine. A lot of people are just going machine-first. So those are two things that super resonate for me, and I have conversations with people about them all the time. Why don't you just begin? What do you think?

Jon Herstein (00:48:40):
Yeah. Well, I like the idea of being very conversational with AI, so that you can get your thoughts out and have sort of a thought partner, if you will, to help you refine and organize, that kind of thing. And then when it's done, you have an artifact that you as a human can also check: "Does this still reflect what I believe, what I think, what I'm trying to communicate, what I want to get across?" So you're right: you start with you, have the AI help you clean it up, and then you end with you. And I think that's a great sandwich to think about. Start with you and end with you. They're just coming out of the woodwork, Jon. Well, and I think we've been clear, I think most of us have been clear with our teams, that even if you use AI as a tool in delivering the work that you're responsible for, at the end of the day, you are still responsible for that work.

(00:49:26):
We don't delegate hiring decisions to AI. You might use AI to help you assess candidates or come up with questions or help screen, but at the end of the day, it's a person making that decision, right? It's not AI. And I think that maintaining that rule is very, very important in enterprises.

E.A. Rockett (00:49:42):
Absolutely. It starts with you, ends with you. It's the equivalent of when you do a keynote, right? If you cooked the direction, you cooked the presentation, you get up there, you're feeling it, you nail it. If someone else cooked it for you, you're like, "I can't do this. What am I saying?" Just

Jon Herstein (00:49:59):
Repeat someone else's words, right? Exactly. Exactly. Last one in the advice category, what would you tell CIOs or legal leaders who are behind on AI readiness today? What should they do? It's 2026 now. They're behind. What should they do?

E.A. Rockett (00:50:14):
They should look at either their corporate card or their personal card and see how many dollars are going out each month for subscriptions to these foundational models, and, potentially, worry about the inverse. The amount of money that I spend per month on these foundational models is like a car payment, because I need more than the basic level, the free tier, of any of this stuff to understand what's going on, to stay on the pulse of this thing. I need to really understand the landscape. I need to play with all these things. This is not something that someone on my team can brief me on. I need to get my hands dirty. Code meets paper, that means that ... I mean, coding is now taking on a different construct, but it's following that philosophy, right? Kind of the same thing. I need to roll up my sleeves. I need to get my hands dirty.

(00:51:02):
I need to be able to have those intelligent conversations. I need to be able to sit in a meeting, and I did this a couple of weeks ago, and say, "Hey, before we leave the meeting, and I know this wasn't on the agenda, I just tried this yesterday, and if you tried this, it would do that for you. It was like, woo." Get people

Jon Herstein (00:51:17):
Excited about it, right?

E.A. Rockett (00:51:18):
They get excited, but that is rooted in my making all these things accessible to myself. So if you're talking about KPIs, my KPI is this monthly expense going out of my personal pocket, as well as my corporate card, each month. What it indicates is that I am desirous of, and must have, access to all these things, so that I can play with all of them, so that my IQ is raised. And then I have a foundation (everyone has their own experiences) to truly engage with people, to compare notes, the water cooler thing like you're talking about, all of these things. And when I talk to other people in roles similar to ours, Jon, some of them, hands dirty, sleeves rolled up, are in there. Others of them are getting briefed by their team. I don't know how you get briefed by your team on the next cutting-edge thing.

(00:52:09):
They can do it. Well,

Jon Herstein (00:52:11):
Especially if you're in a role like yours, where you are the CTO for your organization, how can you not understand these things? And, I would say, we don't plug Box on this podcast, but the approach we've taken, with the ability for customers to choose different models but still work on their same content without having to hand their content over, is incredibly valuable. Because, exactly to your point, I can take the exact same prompt, the exact same set of documents, and I can ask Gemini a question, I can ask ChatGPT a question, I can ask Claude a question, and see how they compare in terms of things like accuracy, tone, et cetera. And I may decide to use different models for different use cases, or multiple models for different parts of the same use case. It's a very powerful capability.

E.A. Rockett (00:52:55):
I saw that, Jon. So, fun fact: at Boxworks, I went to multiple masterclasses, and I was sitting in one because I've got to get my hands dirty, right? I've got to get my hands dirty. I'm sitting in one of the masterclasses, and I didn't know before the masterclass that I could do that in Box. And I think I might've said something out loud, like "Yes!" or something like this. And everybody looked at me like, "What's up with that?" I'm like, "Never mind, never mind, never mind." But that capability is amazing. People should know about those capabilities, and people should attend as many masterclasses at Boxworks and/or elsewhere as they can, because I was a happy fellow that day.

Jon Herstein (00:53:30):
And by the way, on masterclasses, we are going to take that on the road. We're working on some stuff right now to take some of that same content, because not everyone, obviously, could come to Boxworks, but I'm glad you did and I'm glad you had a good experience. Okay. I always love to talk about three things in my role, and they're value, culture, and experience. How are we driving value for our end users, consumers, customers, what have you? How do we think about culture, and particularly change management: how do we get people to do things differently? And how do we think about the user experience that we provide? We think about that a lot in customer success. So I just want to do a little bit of rapid fire on these topics. And the first one is: how do you think about, define, and measure the value of Adobe's AI and legal tech initiatives?

E.A. Rockett (00:54:09):
People don't like to, and we don't want them, doing tasks which are boring, dirty, or dangerous. If the thing you're solutioning for hits one of those elements, that's pretty good. We talked about filters, talked about the subtractive method; in our case, it's a lot of the boring stuff. And then, when you're looking at the thing you're trying to solution, so that's one of my axioms, if you will, I look at what I call QVC. Kind of like the shopping channel, but the acronym is for something else: quality, velocity, and cost. If I solve this boring thing for you, can I solve it at the same quality that you would've done it, if not better? Will I increase the velocity, will it get done quicker? And will the cost be cheaper, including zero? And so that's how I look at that first proposition. And I'm pretty analytical about it, because I'm convincing myself during the process.

(00:55:03):
It's one of those things, like software developers in small companies get into: is this level of effort worth it? Is the level of effort worth it? Is it solving something that's boring, dangerous, or dirty? Although only the first one really pertains in our context. And the second thing is: okay, do I have a reasonable belief that I can address one of quality, velocity, or cost?
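The two-step test described above can be sketched in a few lines. Everything here is an illustrative assumption for this article (the field names, the numeric ratios, the thresholds); it simply shows the shape of the filter: first the boring/dirty/dangerous check, then at least one win on quality, velocity, or cost without a quality regression.

```python
from dataclasses import dataclass

# Illustrative sketch of the "boring/dirty/dangerous" filter plus the
# QVC (quality, velocity, cost) test described above. The thresholds
# and field names are assumptions for the example.
@dataclass
class UseCase:
    name: str
    boring: bool
    dirty: bool
    dangerous: bool
    quality_ratio: float   # AI quality vs. human baseline (1.0 = parity)
    speedup: float         # velocity gain vs. human baseline (1.0 = same speed)
    cost_ratio: float      # AI cost vs. human cost (0.0 = free)

def worth_pursuing(uc: UseCase) -> bool:
    # Step 1: does the task hit at least one of the three Ds?
    if not (uc.boring or uc.dirty or uc.dangerous):
        return False
    # Step 2: does it win on at least one of Q, V, or C,
    # without losing on quality?
    qvc_win = uc.quality_ratio > 1.0 or uc.speedup > 1.0 or uc.cost_ratio < 1.0
    return qvc_win and uc.quality_ratio >= 1.0

# Hypothetical example: a boring task done at the same quality,
# four times faster, at a fifth of the cost, passes the filter.
triage = UseCase("contract intake triage", boring=True, dirty=False,
                 dangerous=False, quality_ratio=1.0, speedup=4.0, cost_ratio=0.2)
print(worth_pursuing(triage))  # True
```

The point of encoding it this way is that each of Q, V, and C can carry its own metric, so the yes/no answer is backed by numbers rather than intuition.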

Jon Herstein (00:55:28):
Right, maybe all three. And without going into the specifics, each one of those categories, Q, V, and C, can have associated metrics. You can measure quality in a particular way, you can measure velocity in a particular way, and the same with cost. So you can start to turn that notion of value into a measurable thing. Exactly. Yep. Okay. So then let's move to culture and change. What has proven the most effective for you in building trust and driving internal adoption? It could be generally, but certainly with AI: are there specific techniques, tactics, that you've taken that you found to be particularly effective in getting people to do something different than what they're used to doing?

E.A. Rockett (00:56:06):
Well, there's something I've come across, but I'm not sure it's just me. When people join our organizations, whether they're junior or mid-level or even senior for that matter, across the gamut of experience from expert to novice (the novice who hopefully will be an apprentice and become an expert), all of us say, mostly because we're in the technology sector, that a good idea can come from anywhere. And I fundamentally believe that. I've always believed that, you've always believed that, the people we've worked with have always believed that. But how did that translate into reality? It was kind of hard, because it was like, well, can you spin me up so many servers? There was just a payload, sometimes infra, sometimes other dependencies, associated with somebody doing that thing, that good idea that came from anywhere. And let's just be candid about it, as we always are: the more senior somebody was, the more resources they had to enable that.

(00:56:58):
So although we said good ideas come from anywhere, people who were more senior, with more resources within an organization, had a bigger opportunity. This has become the equalizer in two ways. The people who are more junior in an organization are like, "I have a great idea. Let me now show you." Because AI has quick cycles, better cost structures, all that, they can show you, they can demo it to you, they can show-and-tell it to you. Conversely, the people who are junior, who before would be stuck on "How am I going to do that?", it's like: you can do it now.

Jon Herstein (00:57:30):
Yeah. Well, I mean, we see this every day in our world. Just think about building a custom demo for a customer: before, the effort to do a truly custom demo for a customer was very high. Whereas now you're like, well, I could create a custom demo environment for every single customer that truly is custom, that takes some of their content or content that's relevant for their industry, and using these tools, I can show them something where they go, oh, that totally makes sense to me. I'm not showing a retail company a legal example, or a legal company a retail one. This is very, very differentiated now. So that's a great ... Showing that power I think is pretty important.

E.A. Rockett (00:58:05):
You touched on something else, Jon, that I wasn't thinking about until what you just said. In the past, we would pre-cook things, because it took more time to cook, and we'd have to explain and analogize how it worked for their use case. I'm actually seeing the opposite now. Since you can do that personalization for their specific problem, they're like, "Wow, that's cool. That solves this problem, but it can also solve these other problems." So you're seeing the reverse now: because we're able to show how it can address that problem more precisely, they're now able to understand the customers, the clients, the other business units, the other aspects of their organizations where they can scale that capability. So it actually went the reverse way, in a positive way, in my experience.

Jon Herstein (00:58:49):
And speaking of experience, my last category was experience, the experience we create for our customers or end users or whoever your stakeholders are. So let me ask you this final question. When you roll out an AI-enabled solution, what does success look like?

E.A. Rockett (00:59:04):
I'm going to go back to a previous answer. It is as it was. I think we got it right a long time ago: it was about engagement, it was about the time they were using it. This was way before social media, et cetera. We were always asking: are they using it? How many times a day do they log in? How long are they on it? As rudimentary and as old as that may be, when we moved into social media, we were looking at the same things. As we move into AI, I'm looking at the same things, because if we're building a solution for humans, the engagement KPIs, monthly active users, daily active users, you name it, are as paramount as they always were, if not more so, because people have more options now.

(00:59:52):
So if they're actually using your thing more, because in your case, Jon, they can sit in Box AI and flip between my Audibles. Or in my case, they can sit in Firefly or sit in Acrobat and do all that stuff. Because options just keep showing up every day. And if your engagement is going up while options are getting wider, that's great for what we are all doing. And that's

Jon Herstein (01:00:18):
Success. And that's success. So now we know what it looks like.

E.A. Rockett (01:00:23):
Exactly.

Jon Herstein (01:00:24):
Well, Rockett, thank you so much. I think that's a wrap for today's episode of the AI First Podcast. I really appreciate your time, your thoughtfulness, your experience, and certainly your partnership over the years with Box. And we look forward to doing a lot more with you. I think this was a very interesting and unusual, perhaps rare, look at the ways that legal, AI, and technology not only coexist, but actually reinforce and unlock each other. So I appreciate that. And I'd say, to me, if there's a single theme that really stood out, it's that scale doesn't just start with these models. It starts with our own experts, the unique knowledge that we have inside our domains, and then leveraging that given these capabilities, the trusted content, the frameworks like GIFTS that you shared with us, and the right guardrails that begin with ethics, which are not a constraint, but actually a launchpad for responsible execution and consumption of these technologies.

(01:01:19):
So thank you for all of that, all those insights. I think our audience and listeners will really appreciate that and learn a lot from that. So thanks again. And folks, if you'd like to hear more stories like this, subscribe wherever you get your podcasts. And if you're ready to bring secure AI-ready content to your enterprise, head on over to box.com/ai, you can see what Box AI can do for you. Thanks again, and we will see you next time. Thanks for tuning into the AI First Podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI first era.