AI is changing how we work, but the real breakthroughs come when organizations rethink their entire foundation.
This is AI-First, where Box Chief Customer Officer Jon Herstein talks with the CIOs and tech leaders building smarter, faster, more adaptive organizations. These aren’t surface-level conversations, and AI-first isn’t just hype. This is where customer success meets IT leadership, and where experience, culture, and value converge.
If you’re leading digital strategy, IT, or transformation efforts, this show will help you take meaningful steps from AI-aware to AI-first.
Karthik Devarajan (00:00):
We wanted to build, just as we have the Google habit, this AI habit within the organization. We were thinking of how we can make this an AI practice for all our staff. We started with a problem that everyone agrees is a problem, and then you work on it. Show them, okay, this is what AI can do to help you with this problem, show success with that very small problem with a smaller group, and build trust.
Jon Herstein (00:33):
This is the AI-First podcast, hosted by me, Jon Herstein, Chief Customer Officer at Box. Join me for real conversations with CIOs and tech leaders about re-imagining work with the power of content and intelligence, and putting AI at the core of enterprise transformation. Hello everyone, and welcome to the AI-First podcast, where we explore how technology leaders are transforming their organizations with AI. I'm your host, Jon Herstein, Chief Customer Officer here at Box, and today we're joined by Karthik Devarajan, Chief Technology Officer at the American Humane Society, the nation's first humane organization, with a remarkable 150-year legacy of protecting animals: from disaster rescue operations to farm certification programs, from service dog training to wildlife conservation. American Humane Society's work spans an incredible breadth of mission-critical activities. As their CTO, Karthik is pioneering how AI can amplify humanitarian impact at scale. He's transforming field operations, revolutionizing how decades of institutional knowledge become instantly accessible, and building AI capabilities that don't just improve efficiency, they literally save lives. In today's episode, we'll explore how a mission-driven nonprofit is navigating the unique challenges and opportunities of AI adoption. So let's dive right in with Karthik. Welcome.
Karthik Devarajan (01:59):
Thanks, Jon. Thanks for having me.
Jon Herstein (02:01):
It is my pleasure. I would like to start, Karthik, with a bit about your role and the scope of that role. Chief technology officer could mean lots of different things in different organizations. So would you mind telling us a bit about your role, the scope, and how long you've been with American Humane Society?
Karthik Devarajan (02:16):
Since 2021. If you look at my career trajectory, my career has always been about using technology to make a difference. Whether it was my first job at Legal Aid, ensuring access to justice, or enabling education through digital transformation at A CNU, and now advancing animal welfare at American Humane. As you've seen from the American Humane Society website, we have six unique programs that run independently within the organization, and I manage the complete IT for all those programs.
Jon Herstein (02:53):
And I mentioned the breadth of programs that American Humane is responsible for. We talked about disaster rescue, certification, the service dog programs, et cetera. How do you think about building a technology infrastructure that supports such diverse programs but still keeps a focus on a unified mission? How do you think about that?
Karthik Devarajan (03:12):
That's a good question, Jon. We do have, as you mentioned, different programs. We have our rescue team that gets deployed during hurricanes. We have our farm team that audits our chickens, eggs, turkeys, you name it. We have our conservation team that certifies zoos and aquariums, then we have our military service dog team, and we have our Hollywood team that certifies animals on movie sets. As I mentioned before, each of those individual programs runs differently, but the most rewarding aspect of my job is providing a technology platform that connects all these programs together within American Humane. And the only way to make that work is by building a connected system that brings all these programs together. So what I mean by that is, say for example, our finance team uses Workday. Our development team uses Salesforce.
(04:12):
Our farm team uses a product called Intacct for audit and certification, but there is a central hub that connects all these different platforms and where all the final data lives, and that's Box. It can be your rescue photos, your audit files, your reports from the field. It's all stored in one central location, and that's the common thread across all the programs. So Box keeps us not just organized and able to share easily, but also gives all the programs a secure location to work from. So even though our team is spread across the country, we have offices in DC and Palm Beach, and we have staff working across the world, we are all working from the same information that's stored within Box.
Jon Herstein (05:02):
Right. We're going to talk a lot about AI in this episode, obviously, but I'm sort of curious, before we get into the nitty-gritty and the technical side of it: from a personal motivation perspective, or even just a turning point or epiphany that you had, what led you to think about applying AI to this whole world of animal welfare?
Karthik Devarajan (05:23):
My career has always been about using technology to make a difference. So the turning point for me came when I realized technology and compassion don't have to be separate worlds. When I joined American Humane in 2021, I saw an opportunity to use modern tools not for convenience, but for impact. And you start implementing them one or two at a time. As we were implementing those modern tools, we did see a lot of positive impact on the work that we do. Specifically for AI: when we launched Box and all the other projects, we saw that we have thousands of records, lots of images, lots of certification files tied to the welfare programs, and we realized that AI could help detect patterns and reduce manual work. What ended up happening was it helped our staff protect animals more quickly and more efficiently. So my motivation is: whatever innovation was previously reserved for the private sector, the modern tools or even AI, let's bring that into a space that saves lives, like American Humane. When you use AI responsibly, as you mentioned in the introduction, it amplifies empathy. It also allows our team to spend less time on paperwork and more time on the purpose.
Jon Herstein (07:10):
I love that. Less time on paperwork and more time on the purpose. And you talked a lot about innovation as well; in fact, the American Humane website, I think, describes the organization as protectors, advocates, scientists, and innovators. How do you all think about innovation in a legacy, well-established organization as compared to, let's say, innovation in a startup? And is there a benefit that you get from all that history, from an innovation standpoint specifically?
Karthik Devarajan (07:34):
Yeah, American Humane Society is a very special place to work. From an innovation standpoint, especially at an organization like ours, 150 years old, it's different from a startup. What I mean by that is, in a startup, you can move fast because you're starting from zero. At American Humane, we can move fast, but we also move with a purpose. Every new technology, new idea, new solution that we launch at our organization has to honor the trust and the goodwill that we have built over the last 150 years. The advantage here is we have a very good foundation. Over the 150 years of its existence, we have numerous data points, field experience, and the wonderful relationships with folks that most startups would dream of. So when we bring in something new, even AI or any workflow automation, we are not experimenting in the dark. We are building on real history and proven impact that has worked for the last 150 years.
Jon Herstein (08:50):
Well, speaking of impact, what is maybe most exciting for you right now at that intersection of the impact you can have and the capabilities of AI? You talked about being more efficient, being able to focus on the mission and less on paperwork, but what do you find most compelling in the moment we sit in right now?
Karthik Devarajan (09:07):
What excites me about AI right now is that it's finally moving beyond theory. It's becoming more practical in areas that truly make a humane impact. For a mission-driven organization like us, AI lets us achieve the same level of precision and intelligence that was previously reserved for the Fortune 100s and the Fortune 10s. We are using the same AI that these companies use, not for profit, but for impact. To give an example, I think we've discussed this before in one of our conversations: we are using AI to analyze images from our rescue operations, from our farms, and other places. We are analyzing big data sets from our farm audits, which is helping us flag any kind of potential welfare issue faster and more accurately. So in a sense, AI is not replacing our experts; it's augmenting and increasing our experts' ability to protect animals at a larger scale.
Jon Herstein (10:21):
It sort of allows you to extend your reach and respond more effectively across all the different programs that you run. You gave a couple of examples. Any others that we should be thinking about?
Karthik Devarajan (10:30):
The biggest is our farm program, because that's where we collect numerous data points. It's gotten to a point where the amount of data we collect has grown beyond what a human can look into. So we do need solutions like AI to identify those hidden meanings, those hidden layers, the patterns that are very hard for a few auditors or our back-office folks to find.
Jon Herstein (10:58):
So does it sort of help you filter down where the human should focus? You're still going to have humans in the process, I assume, but AI helps you sort of narrow down where to pay attention.
Karthik Devarajan (11:06):
Yes. Yes, that's correct. We are not replacing any of our experts. I mean, you still have the humans doing it, but as the organization grows, as we add more producers that we certify, and as we add more species that we certify, there's only so much we can do as humans looking at that data and analyzing it. Yes, we can build reports, we can create dashboards, we can do any kind of visualization. But if you're looking for something beyond that, what does it mean to be certifying a producer for, say, broiler chickens or eggs? Those kinds of analytics and analysis are hidden behind the data. I'm not saying it can be done only with AI, but AI helps us bring that information to the front.
Jon Herstein (12:02):
When you say AI in this context, I assume you're not talking about just generative AI, but also machine learning and data science as well. Or what's the full scope of what you mean by AI helping in these tasks?
Karthik Devarajan (12:13):
So it's definitely not just generative AI. If you've looked into our audit checklist, which is publicly available on our website, it tells you the list of questions that our auditors go through when they're certifying a facility. And you do it over a hundred times per producer, say for example. And then we have a hundred producers, so a hundred times a hundred, and each checklist has about a hundred-plus questions, and you can quickly see how many data points we collect per species. We do analysis of what patterns we are seeing, what potential non-conformances we can expect given the past data in our system, those kinds of things. So we use AI to make those inferences, to tell us where we should be looking to ensure that the producers we certify get certified without any compliance issues. It helps us be more proactive with our data.
Jon Herstein (13:15):
Right. I'm curious, maybe a little more tactical question, but with all this opportunity to make these processes more effective, how did you decide which workflow to pilot first? Was there a specific set of user pain points or data opportunities that made it the right proving ground for AI? How did you go about that, systematically or subjectively? What was the approach?
Karthik Devarajan (13:37):
So when we started thinking about where to pilot our first digital workflow, we were looking for a couple of things. One is identifying a problem that everyone agrees is a problem: this workflow, this process is time-consuming. Two, we do it again and again and again, and there's a lot of manual intervention. So those are the two basic things we were looking for. We wanted to ensure that we start with something small,
(14:08):
Something that everyone agrees is a problem, and something that our staff is doing on a repeated basis: the same things week after week with a similar set of data. So something small, not overly complex, something they're doing in a repeated fashion, and something that the stakeholders and IT agree is a big pain point. Once we do that, we involve the stakeholders responsible for that particular process and show them where AI can add value. We were able to show some immediate improvements: fewer errors, faster reporting, and even better visibility for the leaders of those programs. For our Hollywood team, our back-office folks provide the leadership with what we call outreach reporting. What we mean by that is, we work with many production houses to certify animals in movies, and every month we send a bunch of emails back and forth with them regarding animal welfare issues, questions about our standards, and things like that. And the current process is this:
(15:21):
We take that information out as a report from our system, which is used for certifying movies, and put it in a super large spreadsheet. Then, if the leadership wants, say for example, the outreach summary for production X, the staff go through the spreadsheet, looking at all the information, looking at the hundreds of emails that have been sent, rewriting them in their own words, and submitting them as a report. So once we have that in the spreadsheet, what we are having AI do, specifically Box AI, is this: the report gets automatically taken from our system and ingested into a folder in Box that's part of a hub, and then the head of the program, in this case the Hollywood program, can prompt things like, give me a summary of what happened with production X for the month of March. Box AI then looks into the spreadsheet, reads through the 30 or 40 emails that are part of the Excel cells, and summarizes them in the format we have trained it to use. This has significantly cut the time it takes to turn a big data set into a report in a form that's standard, consistent, and much quicker.
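For flows like this, the glue can be quite thin. As a hedged illustration (not American Humane's actual implementation), a Box AI "ask" request for one of these monthly summaries might be assembled as below; the file ID, the prompt wording, and the `single_item_qa` mode value are assumptions on my part, following Box's AI API as I understand it:

```python
import json

BOX_AI_ASK_URL = "https://api.box.com/2.0/ai/ask"  # Box AI question endpoint

def build_outreach_summary_request(file_id: str, production: str, month: str) -> dict:
    """Build the JSON payload asking Box AI to summarize one outreach spreadsheet.

    The prompt mirrors the kind of question described above; a real integration
    would POST this payload to BOX_AI_ASK_URL with an OAuth bearer token.
    """
    return {
        "mode": "single_item_qa",  # ask about a single file
        "prompt": (
            f"Give me a summary of what happened with {production} "
            f"for the month of {month}, in the standard report format."
        ),
        "items": [{"id": file_id, "type": "file"}],
    }

# Illustrative values only; the real file ID would come from the hub folder.
payload = build_outreach_summary_request("1234567890", "Production X", "March")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that once the spreadsheet lands in the hub folder, the "report" step collapses to a single parameterized prompt per production and month.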
Jon Herstein (16:44):
So that's about speed of execution. I was curious about some of the before-and-after metrics. When you look at some of these pilots, and even things you've put into production already, what are some of those key metrics: response time, audit throughput, grant reporting speed? How are you demonstrating tangible AI impact? You gave one example, but what are some of the other things that you're measuring?
Karthik Devarajan (17:05):
So the biggest metric is time saved. Our hope is that when staff spend less time on reports, they spend more time helping and saving animals. That was our end goal. For this Hollywood program, it's clearly a 90% increase in efficiency in terms of how much time it takes them to consolidate their information, rewrite it, and put it in a report format. Now it's just a simple prompt, and it's consistent, it's standardized, and we know there's a significant improvement in time. In other cases, we can't quantify the efficiency because we never had to do those kinds of tasks before, so there's no benchmark we can refer to. We didn't have that information before, but all we know is AI is taking us into a different avenue where we didn't have that data before, and now we do. What we can do to enhance the program, we have to wait and see, because we just started it very recently. It'll be good to revisit next year to see how this hidden data has helped the program.
Jon Herstein (18:13):
So is there an example of one of these processes that you just literally couldn't do before and now AI enables you to do it?
Karthik Devarajan (18:19):
It's not that we were not able to do it before. We were able to do it before, but it was taking a significant amount of time. Again, in the example I told you about the Hollywood team, it's a super long spreadsheet, and if you've ever copy-pasted emails into an Excel cell, you've seen how convoluted it gets. Staff were just scrolling up and down, trying to make sense of the emails, understanding them, and then rewriting them in a way that's digestible for our program heads. So they were able to do it, but it was taking a significant amount of time, and with AI, after a few iterations, we were able to get to a point where it's a simple prompt away.
Jon Herstein (18:59):
It sounds like a really important North Star for you is saving time for staff so that, going back to your earlier point, they can really focus on the mission and less on mundane, repetitive, error-prone tasks. You mentioned one use case, but another one I think we've talked about is that you're leveraging Box AI for the storage of policies and playbooks for employees, so that they have instant access to things. I assume that's also a time-saving measure. Can you talk a little bit about what you've done there and how it's helping the organization?
Karthik Devarajan (19:30):
Sure. So we recently moved to the Workday platform. The old financial system that we had was about 25 years old; the last time we did an upgrade like this was in the year 2000. So it was a system the organization had been using for 25 years, and then we moved to Workday, and obviously the processes changed, the workflows changed, how people submit expenses changed. There were a lot of changes brought in, and we have staff who have been in the organization for so long that a change like this was making things a little bit difficult for them. Of course, we did have training and handholding and workshops for staff on how to use the new system, but what we have seen is that not everybody needs a full 60-minute training or workshop. So what we have done is take all the documentation that we have, things like how do I submit an expense, how do I use the Workday mobile app, how do I upload an invoice,
(20:36):
how do I approve an invoice, what happens if the invoice is wrong, all those kinds of things. We documented it, put it in a Box hub, and shared the hub with the entire organization. So staff don't have to wait for training or read through multiple documents. If I'm a new staff member at American Humane, I can just go into the hub and say, hey, how do I submit an expense for the organization? And it gives a good summary: okay, step one, go here; step two, log in with your American Humane credentials, and whatnot. But at the bottom, it also points to the documentation it's referring to. So the staff know, oh, this is where it's coming from. They click on it, open the document, and either bookmark it or print it out, whatever's convenient for them. So in that particular use case, you're giving staff different options. If you want to be part of a training, you can do the training. We have recorded videos; if you want to watch a video, you can do it. Or if you have a specific question, go to the Box hub, ask the question, get your answer, and move on.
Jon Herstein (21:45):
And that citation functionality we've found to be particularly important as well. We do similar things with our policies, and sometimes you just want the answer, but other times you actually want to read the policy and really deeply understand what the expectations are, particularly if you're a newer employee. So that's a really nice capability.
Karthik Devarajan (22:01):
Our staff also like that because it builds trust. Our staff now understand hallucination, how AI can make things up, and so on and so forth. But once you have a reference there and it points to a document that's within your environment, it gives them trust that, okay, it's not pulling things out of thin air. It's actually referring to this document, and users can click on it and go to the actual document the AI is getting that information from.
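Mechanically, this kind of grounded answer tends to come back as a response body carrying both the answer text and a list of source references. A minimal sketch of handling it, where the field names (`answer`, `citations`, `name`) follow Box's AI response shape as I understand it and should be treated as assumptions:

```python
def extract_citation_names(ai_response: dict) -> list[str]:
    """Pull the source document names out of an AI answer so they can be
    shown to staff as clickable references under the summary."""
    return [c.get("name", "") for c in ai_response.get("citations", [])]

# Hypothetical response, shaped like the hub Q&A flow described above.
sample = {
    "answer": "Step 1: open Workday. Step 2: log in with your credentials...",
    "citations": [
        {"type": "file", "id": "111", "name": "How to submit an expense.pdf"},
    ],
}
print(extract_citation_names(sample))  # → ['How to submit an expense.pdf']
```

Surfacing those names next to the answer is what lets a reader jump from the summary to the underlying policy document, which is exactly the trust mechanism being described here.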
Jon Herstein (22:28):
Yeah, absolutely. I think that trust is a really key part, especially right now when AI is still relatively new to a lot of people, and they may not feel that sense that AI is correct all the time. Having that source material is really, really important. I want to shift a little bit and talk about leadership and even culture at American Humane, and how you've navigated this set of changes. Specifically, a couple of thoughts. One is: what strategies have you employed to help you move from these one-off, discrete AI experiments to a broader organizational capability that endures, in terms of taking advantage of AI? Maybe you're not fully there yet, but how do you move from, yeah, we're playing around, we're experimenting, to, this is actually how we do business?
Karthik Devarajan (23:10):
For the first thing, we didn't call it a project. Twenty-five or thirty years back, when Google came in and people were able to ask Google a question and get all these answers, it became commonplace; people in organizations know how to Google. So just as we have the Google habit, we wanted to build this AI habit within the organization. We didn't make it one project. We were thinking of how we can make this an AI practice for all our staff. As I said before, the first thing is starting really small. We started with a problem that everyone agrees is a problem, and then you work on it, show them, okay, this is what AI can do to help you with this problem, and show success with that very small problem with a smaller group. Show success and build trust.
(24:09):
And once you build that trust, and once the program staff know that, okay, AI is actually helping them, then you create an internal champion, not an AI guru, but an AI lead for that particular program. Not a coder, but someone who understands the workflow, someone who understands the nuances of the program; you create those internal champions within that program. And then once we have these smaller wins across all the programs, we talk about scaling: okay, we fixed this problem, we built trust in AI, we have AI champions across different programs. Let's see where we can take it one notch up and expand and scale it a little bit more. So that has always been our approach.
Jon Herstein (25:01):
I love that idea of AI champions. Did those come about organically, or did you handpick people? How did you wind up with a set of champions?
Karthik Devarajan (25:10):
I mean, ours is not a very big organization. We are pretty small. Depending on the program, it's anywhere between three or four people to maybe 10. Usually there are one or two staff, and if it's really a big problem that's making their job inefficient, they usually come up with it. They say, hey, you know what, Karthik, this is the problem that we have. This is something that we do that's consuming a lot of our time. It's taking a lot of our energy. We are spending more time on this paperwork. Where can you help? And then we pick that staff member, work with them, and organically grow from there. So we don't handpick, per se; usually it's the staff who come up with the problem first, and then we work with them and make them the internal champion.
Jon Herstein (25:54):
Got it. And it sounds like for the champions, it's not really a new role per se; it's people who have a certain mindset who can also then help advocate for some of that change in the organization. But I am curious about what emerging skill sets you're seeing on the team, again, either happening organically or that you've been very thoughtful and deliberate about creating. What are those new skill sets that you're trying to cultivate in the organization?
Karthik Devarajan (26:19):
So what we have seen so far, with AI being live for almost a year now at American Humane, is that people are now thinking about what needs to go into a document or a PDF or a picture so they can get the right output from AI. They understand the program, they understand how things work, but they're also now thinking, okay, this is what I'm expecting from AI; if I want that output from AI, what needs to go into the file so that AI can be more accurate? When we were working with our conservation program, we were doing something very similar to what we did with our Hollywood program. They showed us their internal spreadsheets where the data is stored, and if I showed you that spreadsheet, it would not make sense at all, because it has some random data about when something was certified, what the payment schedule is, and things like that, in different columns.
(27:18):
But it was not particularly consistent. And of course, when you try to feed that into AI, if you as a human cannot understand or comprehend how that data connects across multiple files, of course the AI is not going to help. We have seen that even during our proofs of concept with these programs: if your data is something that even you cannot understand, you can't expect AI to make sense of it. So what we are seeing, to come back to your question, is that people now understand: okay, if I put this information in a format that they themselves cannot understand, then obviously they're not going to get the output they want from AI. Previously we were training people on how to set up departmental folder structures, how to create those folders, how to have role-based access and whatnot. Now our staff are very confident about how to do that. Now the focus is on: okay, we have a structure; what can we change about the information we put in those files so the output from AI is more accurate? That's the kind of skill we are slowly seeing from staff. They're thinking in that direction.
Jon Herstein (28:25):
Right. That's fascinating. Has that caused you to rethink or reimagine the role of knowledge managers? Do you have formal knowledge managers in the organization, or are people evolving into that?
Karthik Devarajan (28:35):
People are evolving into that. And of course, we do have an applications manager within our IT team, and her job is basically, again, working with these teams to ensure they use AI responsibly and correctly, and to see where they can change the process, change how they work, so they better utilize AI. So it's a collaborative effort, and we're learning as we go as well, because these folks have been using these processes for so long that it becomes really difficult to go in and tell them: you have been storing information a certain way, and now we have to change it in a completely different way just so that AI can work with it, right? It's very difficult to convince people that way. That's why, when we work with a program, we start small and show them the value of AI, so they know it's going to make their life a lot easier. Convincing them becomes easier because we have already shown them some value.
Jon Herstein (29:37):
Jon Herstein (29:37):
Right, now they understand the why. I'm going to come back to the topic of ethics that you brought up, but before we go there, just to round this one out: is there a set of people who own your prompts and the validation of what the AI produces, the tweaking or training of the results? Is that centralized at all, or is that distributed? How do you think about that?

Karthik Devarajan (29:54):
It is centralized when we are doing a POC, obviously, because we want to ensure that the data AI generates is accurate. So we do a lot of validation centrally, from within the IT team. And then once we give that AI agent to the program, we have them test it, and then have them confirm and double-check that the output the AI is providing is accurate. If not, we work collaboratively to refine the prompts, check for accuracy, and so on and so forth. So it becomes a more collaborative effort initially. And what we are seeing is, after a month, month and a half of usage, once we know they're not coming back to IT with any accuracy issues, we know that, okay, this is there, the prompts are correct, it's linking to the right documents, and then we just leave it at that point.
Jon Herstein (30:51):
Fascinating. So it sounds like it's evolving, but it's very collaborative. It's not just the responsibility of the business or of IT, but really both working together, and there's a fair amount of iteration on that.
Karthik Devarajan (31:01):
Definitely. I mean, we have about eight or nine agents in production that we use, and none of those were done in one or two iterations. It never worked that way.
Jon Herstein (31:11):
It's really interesting. You said eight or nine; we've heard of organizations that have hundreds or thousands of agents, and you start to think, how do you get your head around that? I love that this is more incremental, but probably much higher impact, and more highly validated, so that these eight or nine agents are doing exactly what you want them to do.
Karthik Devarajan (31:26):
I mean, if we wanted, we could create a hundred agents on the fly, but our objective is not how quickly we can deploy them. It's how much value each agent is adding to the team and how it's helping our staff with their work. So we are taking a very cautious approach. Again, speed is important, but the impact is even more important for us. We don't want to deploy something for the sake of deploying. I know AI is a big word everywhere, and organizations are trying really hard to fit AI in. We don't want to get into that. We want to be authentic with our AI usage, very genuine about it, and clearly show value. We are not just doing it for the sake of doing it; we are doing it with a purpose.
Jon Herstein (32:11):
I love that balance of the speed of innovation, but also applying a rubric to things, which is not just could we do something, but should we do something?
Karthik Devarajan (32:20):
That's correct. I mean, as I said before, we are a 150-year-old, very reputed organization in this country. When you are exploring things like AI or data capabilities, we have to ask: will this AI advance our mission? Is it protecting the trust our donors and partners have placed in us? And is it helping the staff who look after the welfare of the animals, and the people who take care of them? So we have a very specific intention in mind. We can do it, but should we do it? And is our intention the right intention, not just for IT, not just for the program, but for the organization as a whole?
Jon Herstein (33:03):
In addition to supporting the mission and making the work more effective, more efficient for your folks that they can focus on the mission. You also have a set of core values that include compassion, kindness, and hope that have really been guides for American Humane for over a century. So how do you ensure that these new AI capabilities are staying aligned with those core values?
Karthik Devarajan (33:24):
So before we bring in any AI, we ask a simple question: does it help people care better for animals? If the answer is no, we don't do it. We could do it, but if the answer is no, we are not going to waste time doing it. We are very, very intentional about keeping humans in the loop, because we still want our experts. When our farm team looks into our big data sets, we want the experts in our farm team to still make the decision. AI is just giving them better insight. So whether you're analyzing welfare data or AI is helping us respond faster to a rescue, the goal is still the same. I think you used this term at the start of this conversation: we are using technology to amplify kindness, not replace it. That's how we make sure the AI projects we are working on are serving the mission of the organization.
Jon Herstein (34:26):
Right. I love that. Amplifying rather than replacing. I want to pivot now to some of the challenges that you face. We talked about a lot of the goodness that's here and how things have evolved, which is great, but what have been some of the toughest data or process challenges that you've come across? Whether it's taxonomy drift, privacy constraints, integration headaches, what have you run into that you're comfortable talking about, and how did you go about fixing it or addressing it?
Karthik Devarajan (34:51):
In our farm program, we use a product called Intacct. The ideal solution would be Box AI talking directly to the database in the backend and giving those insights. But what we are doing right now is creating a report in the system, exporting it out, and having a script that pushes it to Box, and then the AI comes on top of that. So there is a lot of manual work that has to happen to get the insights that we want. But again, AI is evolving and integrations are getting easier. We are still in our early stages, so we are hoping one day we can automate this whole thing. In terms of data privacy, we brought in the legal team even before we started the project. We told them what we are doing and what all the risks and challenges are, and things like that.
(35:35):
And actually we did speak with your general counsel at Box even before we got the licenses for the AI. Our legal team had questions about privacy, data governance, and things like that. The questions were answered by your general counsel, and that gave confidence to our legal team, and they said, you know what, okay, we are good on our end, just go and start implementing it. So we included the significant stakeholders early in the process, and that helped us navigate the challenges, specifically on the privacy and the legality of it. The integration I talked about is still a challenge, because our data is in multiple different places and we have to bring it back to Box. That's still manual work that we do. But again, as I said, we are still in the early stages. There's always room for improvement, and we will get there one day.
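To make the manual hand-off concrete: a minimal sketch of the kind of export-and-push script Karthik describes might look like the following, assuming the Box Python SDK (`boxsdk`). The report name, folder ID, token, and validation steps here are illustrative assumptions, not American Humane's actual script.

```python
# Hypothetical sketch of the manual Intacct -> Box hand-off: a report exported
# from Intacct as CSV is lightly validated, given a date-stamped name, and
# pushed into a Box folder so Box AI can reason over it.
import csv
import datetime
import pathlib


def stamp_report_name(base: str, day: datetime.date) -> str:
    """Give each export a unique, sortable name, e.g. farm_audit_2024-05-01.csv."""
    return f"{base}_{day.isoformat()}.csv"


def count_data_rows(csv_path: pathlib.Path) -> int:
    """Sanity-check the export before upload: count rows excluding the header."""
    with csv_path.open(newline="") as fh:
        rows = list(csv.reader(fh))
    return max(len(rows) - 1, 0)


def push_to_box(csv_path: pathlib.Path, folder_id: str, access_token: str) -> None:
    """Upload the validated export to a Box folder (requires real credentials)."""
    from boxsdk import Client, OAuth2  # pip install boxsdk

    client = Client(OAuth2(client_id=None, client_secret=None,
                           access_token=access_token))
    client.folder(folder_id).upload(str(csv_path))


if __name__ == "__main__":
    # Illustrative values only: the real script would use its own paths and IDs.
    name = stamp_report_name("farm_audit", datetime.date.today())
    path = pathlib.Path(name)
    if path.exists() and count_data_rows(path) > 0:
        push_to_box(path, folder_id="123456789", access_token="<developer token>")
```

The upload itself is gated behind a row-count check so an empty or failed Intacct export never reaches the folder Box AI reads from.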
Jon Herstein (36:24):
Yeah, I agree. And I do feel like this integration problem is going to get a bit better in some ways, maybe a bit more challenging in others. Rather than having to write code that connects one set of APIs to another set of APIs, we're going to have agent-to-agent integration that may feel a little more natural to build and develop and test. So I agree with you; I do think it's going to get easier as more of these capabilities are exposed to agents. We talked a little bit about the format of your content, but did you run into any issues around data quality that may have had an impact on the value that you got from your AI on top of it?
Karthik Devarajan (36:57):
So the biggest challenge we had, when we worked with a smaller data set and AI was giving insights into our data, was how do we verify that it's correct, and who's doing that verification? That's the biggest challenge. If our legal team asks us, okay, AI is giving all these insights about the data for, say, broiler chickens, how do we go back and ensure that the insights AI provided are actually correct?
(37:26):
So that was our initial challenge. We spent a lot of time ensuring the output that AI is providing is accurate, and if it was not, changing where the data lives within the document, playing with it, and seeing how AI responded. That's why we needed all these multiple iterations, because initially the answers we were getting were not the most accurate ones. The reason is not that the AI was failing; it's the information and context we give in the data, in the file, that was missing. Since we understand the business and we have institutional knowledge, we understood that with AI, you have to be very explicit if you want more accurate information.
Jon Herstein (38:14):
Definitely learning and iteration as you go. We'll talk about that a bit more, but you also mentioned having to convince some folks in legal around privacy concerns and so forth. Any other notable skeptics? You don't have to name names, but just organizationally, were there skeptics in the org that you had to convince and how did you convert them from skeptics to advocates?
Karthik Devarajan (38:33):
Our goal was not to have an AI project; we want this to be how people are using Google now. We want our staff to be using AI in a similar capacity. So we weren't selling AI. We didn't go to leadership meetings and say how great AI is, what it can do, how it can make your life better and your work process more efficient. We didn't go with that approach. The skeptics we had came from what our staff read in the news
(39:05):
About how AI is replacing humans, how AI is hallucinating, and how the information AI provides is not accurate. Then you also see some lawsuits about how AI is being used in the hiring process and things like that. Those things are beyond our control. People read that information online, they hear it from their friends. But what is in our control, as I said, is: tell me your specific problem, let's see if AI can fix it, and show them the results, show them the proof. That's how we could win over the skeptics, because if we just keep talking about AI and don't show them the actual value, they are going to continue to be skeptical of AI. That's why we involve the stakeholders. We have them tell us what problem AI can solve, then work with them to fix that specific problem and show value. Once they see the outcome, once they know, you know what, AI is not all BS, it's definitely making my life easier, that's how you win them over. And once you have champions throughout different programs, people talk to each other, one person shows it to another person, and then you have people coming to you saying, I heard you did this for John Doe and this particular team, can you help me with this? And then you make it grow more organically.
Jon Herstein (40:30):
Yeah, I love that. You were talking about trust earlier: you build that trust through success, even if it's one win at a time, and then it starts to grow, and then you have people coming to you and saying, can I have some of that? Which is great. Were there any early pilots, and if you don't want to answer, that's fine, but any early pilots that you felt just didn't work and that you kind of killed? And were there things that you learned from those that affected how you did things going forward from there?
Karthik Devarajan (40:55):
Unfortunately, I was also one of the people who got too excited too soon. When this whole AI thing launched, I wanted to do everything, because I saw I was able to analyze thousands of records in a few seconds. I thought, okay, let's take a data dump from our existing farm database and have AI analyze that, and that was a problem. As soon as you reach something like 10,000 rows of data, it's not meant for that, specifically Box AI. That's what we found. So we got a little too excited with that. That's when we realized, you know what, AI is good, it's good for small problems, small data sets, to make it work, to show value. Just don't start importing 500,000 rows of data into AI and have it analyze it. It's just not going to work. It's not going to be accurate, and it's just going to time out, and time out multiple times, and then you would just end up giving up.
(41:56):
There are other ways to accomplish that, but just using the out-of-the-box Box AI that you have with your license is not the way to do it. So we jumped into it too soon, too quick. We got really excited, we tried to work with a huge data set from the get-go, and then, as quickly as we started, we found that that was not the right way to do it. So we really stepped back. We spoke with other organizations about how they carefully launched AI and learned from that. That's when we thought, okay, let's start with a small problem, show value, and scale one step at a time.
Jon Herstein (42:33):
Thinking carefully about the scope. I'm just thinking about advice for others: thinking very carefully about the scope of the problem that you want AI to tackle. I would say too that part of this is driven by the size of context windows, which have obviously gotten bigger and bigger over time as newer models have come out. But that's leading to a different issue, this notion of context rot, where if you give the models too much information, they kind of forget things, they lose track of things. So just being very, very thoughtful, to your point, about the kinds of problems that are appropriate to solve with generative AI versus some other technology. From a vendor perspective, and again, no need to name names, are there any lessons learned in terms of choosing vendors and how you evaluate? Anything you would redo in terms of your decision matrix or vendor decisions?
Karthik Devarajan (43:16):
I mean, you have to understand where AI fits into your technology landscape. We wanted things to be as simple as we could and not make a significant investment in integrating 10 different systems. So we understand our tech stack, and that's one of the reasons we made Box AI our primary AI solution: because we had already invested in Box. We didn't want to bring in either ChatGPT or Copilot and then have an integration between those two systems so Copilot can read the files from Box, and then you have these integration challenges where it takes time to index. Any new files that you put in take another 24 to 48 hours to index and don't show up in the chat prompt. So we wanted to avoid all those issues. Keep it simple. You have a tech stack; stick with it. I mean, of course, evaluate multiple products, but if your existing tech stack is working and AI is offered as part of it, just go with it. Make things easier, especially for an organization like us. We have a very small IT team. We don't have developers and coders to troubleshoot integration issues and things like that. So wherever we could, we made things work out of the box in one solution, of course with accuracy and ease of use, and Box AI kind of checked all the boxes, and we were able to pick that as our AI platform for American Humane.
Jon Herstein (44:50):
So you're saying leverage the tech stack that you have, that you've built confidence and trust in, as is, and enhance it with AI, as opposed to starting over or doing things completely differently. You've got some things there that you know work, and you should continue using them.
Karthik Devarajan (45:04):
And it makes training easier too. When we launched Box, we spent almost a year on it, and we still do; we still have a monthly Box workshop for our staff. If AI is just layered on top of that, learning becomes a lot easier. Giving them a completely different solution with a completely different interface, with your files in one location, your database in another location, and your prompts in a different location, just makes life more confusing for our staff. Their life is as busy as it is; why complicate it through technology? Keep it simple. If something is working for you, just stick with it. That's our logic behind making Box AI our platform.
Jon Herstein (45:45):
Yeah, I love it. The keep-it-simple mantra is pretty critical. I can't remember where I came across it, but you had this great quote where I think you said audits without AI will soon be considered negligent. What did you actually mean by that, and what needs to be true to make that not an issue?
Karthik Devarajan (46:00):
Okay. So what I mean by that is, again, I've been with the organization for four years. I've seen how much our data has grown, especially the animal care data. The volume and the complexity of the data that we are collecting have completely outgrown what any human can reasonably review alone.
(46:20):
There is so much data, there's so much complexity.
(46:23):
See, an auditor can maybe see a hundred data points during a farm audit, but AI can scan thousands and thousands of our temperature logs, our feeding schedules, our sensor readings, all in seconds, and it can also highlight anomalies before they are missed. So you cannot just leave it to the auditor, because the volume and complexity are so great. If you ignore the insight AI is providing, it kind of becomes your liability, because what it says is that you are now accepting blind spots you could have avoided. AI-ready audits are not about replacing the expertise; they're about giving our auditors and our back-office folks a cheat code, like a superpower, so they can look at those patterns and anomalies with more accuracy. That's why I'm saying you definitely need AI in audits, because the data has grown significantly over the years and has become extremely complex.
Jon Herstein (47:28):
Yeah, I love that idea that it's a superpower for humans, and if you don't use it, at some point you're just not doing your job, because you're not taking advantage of that capability. On the topic of advice for others, let's imagine you had a peer, another CTO at another nonprofit organization, who felt like they were behind, and they came to you and said, hey, I want to get something going next week. What should I do first? What are the first three things I should do? Monday morning, I come into work, I want to start doing something useful with AI. What should I do?
Karthik Devarajan (47:57):
Go very small. I'm very positive about this; I've been with nonprofits for 15-plus years. There's always one workflow, you just have to ask people, that everyone agrees is painful. So identify that, and once you identify it, look at what you can do to clean and centralize the data behind that workflow, because as we discussed before, if your data is not clean, AI is just not going to be as accurate as it can be. Do a 30-day pilot, and if it works, document the steps, show them to another program, and tell them, okay, this is what we've done for this unit. Going back to your question: start small, pick one workflow that everyone agrees is a pain point, fix that problem through AI, and then take it one step at a time.
Jon Herstein (48:50):
Right. Well, it's interesting, because you're saying don't go for the moonshot. Go find something that's actually pretty mundane, but that, to your point, everyone knows is a painful process that could be better, and that gives you a quick, manageable win that you can build from.
Karthik Devarajan (49:06):
Yeah, just don't go for big success from the get-go. Take it one step at a time.
Jon Herstein (49:11):
Well, I don't know how controversial that is, but I'm curious, what is your most controversial belief about ai?
Karthik Devarajan (49:17):
My controversial belief, and I've seen my peers saying it when I meet them at events, is, oh, I don't have the capacity to do it. I don't know if that's true. Maybe there's a limited amount of truth to it, but if you say there is a capacity issue or there's a lack of staff to do it, I mean, it's available. You can start with even the free version of ChatGPT, right?
Jon Herstein (49:42):
Right.
Karthik Devarajan (49:43):
Check whether it's actually a capacity issue, or whether you or your staff are being resistant to change. Most of the time they just blame it on capacity or funding or whatnot, but if you dig deeper, it's actually that nobody wants to change. I mean, they love the status quo: everything is good, why bring in a change? That's my controversial thing. The second thing: I've seen even a lot of IT directors, administrators, and so on, especially in the nonprofit space, talking about banning AI like ChatGPT or Copilot or anything, with the fear that company data might be leaked. There is a risk there, but just because you have that risk, and just because one random person might do it, don't disable it at an organizational level. You're just taking away tools from your staff that would make them more efficient. So come at it with an open mind. If there is a risk, create a policy, train your staff, and launch AI more organically, rather than just saying, you know what, it's a big risk, it's a problem, I'm just going to ban it. I just wouldn't do that.
Jon Herstein (50:54):
So manage it more through policy and trust than through making it unavailable to people. I love the first one as well, because, probably particularly in the nonprofit world, everyone is resource-strapped, so it could always be the excuse to say, well, we just don't have the resources to pursue that. And you're saying there are always resources if you think about it from the perspective of prioritization, what you should focus on and the benefit you could ultimately get. So challenge that assumption that you just don't have enough resources to get something done. And maybe further to that topic, I want to talk a little bit about value, and there being an ROI at the end of it. If you want to convince people to do something different, to reprioritize, you've got to be able to talk about the value that comes out of it. So how do you define the return on investment, both in terms of mission outcomes, we talked about things like time savings so that you can focus on the mission, but also things like operational efficiency or anything else? What is ROI for you?
Karthik Devarajan (51:47):
Our simplest metric is time saved. How many minutes or hours are our staff spending on building those reports, searching for certain information, or reformatting certain things? Our metric is time saving. We don't have actual data where we measure that, but based on our pilot, and now that it is in production, we have seen upwards of about 70% in time savings for that particular task. But from an organization standpoint, the most important ROI is mission-driven. What I mean by that is, if staff are spending less time searching and formatting and repeating the same thing, it means they can make decisions faster and act more quickly to help animals and provide safe conditions for our responders. So it's not just dollars; it's how technology is helping us make compassion move faster.
(52:51):
It's still tied to time, but how fast can we move from a compassion standpoint?
Jon Herstein (52:54):
What are your criteria for saying, okay, we did that pilot, we validated it, we're saving time, we're done, that one's good, and we're moving on to something else? And then how do you celebrate those early wins? You talked about the champions, but what else are you doing to celebrate those and say, this was a great success, let's do more of that?
Karthik Devarajan (53:14):
Based on our experience launching these agents, when we did the pilot, one piece of feedback we got from our folks, which we have seen ourselves, is that AI is not giving consistent answers. Each time, we redo the prompt, we make it more explicit, and we keep fine-tuning, so on and so forth. And it gets to the point where you just stop hearing from your program team, which means that now the answers are more consistent, so the agent itself is more stable. Once that's done, we move on, take the learnings from that, and apply them to another program or another project. So again, we don't rush to scale. And reiterating again: we are not doing an AI project, we are creating an AI habit. We want our staff to use AI like how people are using Google right now. If you want to do that, you have to work with them very closely, more collaboratively, for the first few months until things get more stable, and then you move on.
Jon Herstein (54:20):
Makes perfect sense. Any other quick tips for people on the change management aspect of that? For people who maybe are a bit more resistant, not your early champions, how are you helping your users actually embrace these new ways of working?
Karthik Devarajan (54:33):
There's only so much we can do, but what we have changed is that whenever we are doing a town hall or an all-staff training specifically on AI, we don't just show them the features of AI, oh, you can do this with AI. We take a very real example from within the organization, and we have the person from the program be on the call, be our advocate for it, and show them in real time, a real demo, how it's giving value for that particular person. Once staff see that, people respond more to an outcome, to what it's doing, versus just simply showing them, oh, we have launched agents for all, this is how you do it, step one you do this, step two you do that, and then just giving them a worksheet and moving on. You're not going to win over your staff that way. So we changed our approach: give them real examples, show them a live, real demo with real data and real people, and that kind of helps us.
Jon Herstein (55:39):
It's not hypothetical, it's not coming from IT. It's actually people in the organization, in the program, who are showcasing these things. And it's not about features, right? It's about improvement to some process.
Karthik Devarajan (55:50):
That's correct.
Jon Herstein (55:51):
Yeah. That's great. Okay. I want to ask one question looking forward. You've done a lot already with Box AI, but what are you thinking about for the next six months? What are you excited about? Are there any pilots teed up, or anything you'd like to do that you just haven't gotten to yet?
Karthik Devarajan (56:06):
I think I've briefly talked about it. Our farms are our biggest source of data. There's so much data point from an animal welfare standpoint that we are not fully tapping into. So we wanted to have something AI assisted audit, AI powered audit companion kind of thing, where you have the data from our Intacct platform, flows into a box ai, and then this AI agent can look at the inspection photos, the checklist, past reports, the nonconformance reports and things like that. And then providing a summary for our auditors and our back office folks based on the data that's being fed into box I mean, right now we are doing that with a lot of manual steps, but we hoping to kind of automate that and then have box AI proactively telling us, okay, these are all the finding summary, these are all the corrective action plan. I'm basing this on all the files that have been stored here, and then linking each point back to this original file so it understands our process and then gives more insight into the data that we are manually doing it. So I'm hoping things get a lot more easier from an integration standpoint, make it more automated and have AI be a champion for animal welfare.
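As a rough illustration of the audit-companion idea, the grounded question-and-answer step could be sketched against Box's AI "ask" endpoint, which answers a prompt using specified stored files. The prompt wording, file IDs, and token below are illustrative assumptions, not American Humane's implementation.

```python
# Hypothetical sketch: ask Box AI to summarize audit findings grounded in a
# set of stored files (inspection checklists, past reports, nonconformance
# reports), so each point can be traced back to a source file.
import json
import urllib.request

BOX_AI_ASK_URL = "https://api.box.com/2.0/ai/ask"


def build_audit_question(file_ids: list[str]) -> dict:
    """Build a Box AI 'ask' payload over multiple stored files."""
    return {
        "mode": "multiple_item_qa",
        "prompt": ("Summarize the audit findings and corrective action plans, "
                   "and cite which file each point comes from."),
        "items": [{"id": fid, "type": "file"} for fid in file_ids],
    }


def ask_box_ai(payload: dict, access_token: str) -> dict:
    """POST the payload to the Box AI ask endpoint (requires a real token)."""
    req = urllib.request.Request(
        BOX_AI_ASK_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Today that file set would come from the manual Intacct export; automating the export step is what would turn this into the proactive companion described above.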
Jon Herstein (57:29):
Well, we are very, very excited to work with you on that pilot, that project, and all that follows from there. There's definitely more coming from the standpoint of integrations and automations, making Box AI more connected with other AIs, other agents, and we're very, very excited to be on that journey with you.
Karthik Devarajan (57:48):
Thank you, Jon. Thanks for having me.
Jon Herstein (57:49):
Thank you so much, Karthik. We really, really appreciate all of your insights and your help in understanding how the American Humane Society has been leveraging AI to advance the mission of protecting animals. I found your perspective on building technology capabilities in a 150-year-old organization, while continuing to maintain your core values of compassion and trust, really compelling. I think you provided a lot of valuable lessons for leaders across all sectors, not just the nonprofit space. To our listeners: whether you're in the nonprofit space or leading technology transformation anywhere else, we really hope this conversation has sparked new ideas about what's possible when innovation meets your mission. And if you enjoyed this episode, please subscribe to the AI First podcast and share it with your network. Until next time, this is Jon Herstein reminding you that the future of work isn't just intelligent. It can also be deeply humane. Thanks again. Thanks for tuning into the AI First podcast, where we go beyond the buzz and into the real conversations shaping the future of work. If today's discussion helped you rethink how your organization can lead with AI, be sure to subscribe and share this episode with fellow tech leaders. Until next time, keep challenging assumptions, stay curious, and lead boldly into the AI-first era.