Bots at Work is the new season from Fringe Legal, which explores how AI is changing the way work gets done, with a focus on real-world impact over hype. It looks at how operators, builders, and leaders are using AI to reshape workflows, decision-making, and business models, especially in professional services like law. The show focuses on practical insights, emerging patterns, and honest conversations about what works, what doesn’t, and what comes next as intelligence becomes cheaper and more embedded in everyday work.
Ab (00:31)
Jennifer, welcome to the pod. I'm so excited to have you on. I'm looking forward to this conversation.
Jennifer Waite (00:38)
Me too.
Ab (00:38)
So you posted something, I want to say a couple of weeks ago, and it caught my attention. I'm trying to dig through your very busy LinkedIn to find it, but it was along the lines of you had collected all of these disparate skills and then AGG just went, okay, this is it. You can be the chief. So let's start there. I think you have such a...
cool and interesting origin story. You were a librarian, went into knowledge, and now you're the chief knowledge and innovation officer. So I guess congratulations, first and foremost. Take me back. Give me the origin story.
Jennifer Waite (01:18)
Oh man. Well, I honestly think most people in KM or AI really should be librarians. I have a real strong bias there. On my team we have three librarians, which is probably more than you need at a firm our size. So I'm very biased towards librarians, but mostly because, as you know, KM and AI is all taxonomies. It is all organization of data. It is all governance. It's all retention.
All of that is stuff that they teach you as a librarian. But way back in the day, I went to law school and graduated from Marquette. And then I immediately went to library school afterwards. I had a professor I really liked in law school who taught the legal research class. And I was like, this is the thing I like. I like this piece. I don't like any of this other stuff. So immediately went to library school and then started working at Marquette at the same time. So like I was a reference librarian there for a while.
And then I was the Milwaukee County Law Librarian. Every county that's big enough in Wisconsin gets its own law librarian. So I would work at the courthouse and answer people's questions, all that good stuff. Is it harrowing? Yes. Because the people asking you things are walking into a public courthouse, right? No one's having a good day. No one enjoys their life at that moment. It's like, no, this is the worst day, and also, please help me.
Ab (02:34)
Hmm.
Jennifer Waite (02:41)
So I really liked that public service aspect of it. I missed it, but you know, capitalism is always calling. So about seven years ago I was looking to move into the private sector, and found a position here in Atlanta. And I really liked the folks who interviewed me for it. Obviously I thought I was maybe going to stay in academia, but this was a better position at the time. I was just a research librarian when I started here at the firm seven years ago. So long ago.
Ab (03:10)
Yeah.
Was there some sort of a pivot for you, I guess, mentally or otherwise? I know it's seven years ago now, but something that made you want to go into the private sector?
Jennifer Waite (03:27)
Being an academic law librarian is also a very niche field, so you kind of have to expand your horizons. It wasn't necessarily like, ah yes, I must go work in the private sector. But I was also applying in other markets. Atlanta is a much bigger legal market than Milwaukee is, obviously. There are more opportunities. So my thought was maybe I'd come down, work for a firm, and then maybe I'd pivot back into academia. But I never left, because I like it here. I like what I'm doing. Still here.
Ab (03:53)
Yeah. Okay.
And then AGG, you guys are an AmLaw firm, and I obviously want to go into that a little bit more in a second. But I also noticed that at some point you also went down the data path. You have a Berkeley degree in data science or something, I can't remember the exact details. I found this in my research. Are you a big data geek? Or was that just an, I should do this?
Jennifer Waite (04:11)
Yes.
Yes. No, so I wanted to design a data strategy for the firm. And as a person who loves to go to school, I was like, I must get a certificate for this. It was a really fun program. I got to work with other people who were already professionals. And then at the end, for the capstone, I got to actually draft the data strategy for the firm. So it was win-win. Everybody's winning here.
Ab (04:22)
What are we talking about?
Okay.
Yeah. I need another one.
Hmm.
Awesome.
Okay. Yeah, okay.
Jennifer Waite (04:51)
And that was a really good framework
and good timing.
Ab (04:55)
Okay. Obviously you used the capstone project for work, but do you think it had a major influence on how you run the KM&I program today?
Jennifer Waite (05:09)
Major. I think it was right time, right place. I was already thinking about a good chunk of those things. I had been reading up on data strategies, and on data in general. And as a librarian, I'm kind of never away from data, so I'm never avoiding it. But it was really well timed and positioned for that. It probably did make some small shifts in what I ended up actually putting into the data strategy, and then in how the KM department evolved into what it is now.
Ab (05:39)
Yeah. Thank you for that segue. So, I'd love to talk about what the KM department was like at that point, when you joined or transitioned into the KM seat way back when, and what it's like now. Obviously the world around us has changed quite a lot in the last six, seven years, and that has a major impact on everything. What was the team like? What was your role like? And I know that role has evolved in the time I've known you, and more. So yeah, I'd love to dig into that a little bit more as well.
Jennifer Waite (06:13)
Yeah, so we didn't have a KM team or department, so that helped. We had a library and we had a research department. And then when my previous supervisor retired, she had also run the conflicts department. So at that point they kind of put conflicts under the GC, but then they quickly gave it back to me. So it's conflicts and research services. And then they wanted litigation support and also
Ab (06:16)
Yeah.
Jennifer Waite (06:39)
information governance to report to me. I was like, this is KM at this point. What we are doing, all these departments together, this is all data, this is all KM. So I was like, one, you're going to promote me to director. And two, we're also going to call it KM, because that's what we're doing. We're not just research and writing anymore.
Ab (06:41)
Mm.
Yeah. Yeah. Yeah.
Yeah.
And what was the shift to go from KM to KM&I? Was that an obvious shift because you started owning technology?
Jennifer Waite (07:08)
Yeah, and I wouldn't say own technology, I would say more innovation, because we do have Dan, who is our Chief Information Officer. Actually, he's a CISO, a Chief Information Security Officer. Important difference, he has the certification. So we work really closely together on different technology and innovation projects. Obviously we have to, there's no avoiding it. He's going to help
Ab (07:15)
Mm. Yeah.
Yeah.
Hmm.
Yeah.
Jennifer Waite (07:34)
with the implementation and the security side, and KM and AI is going to handle innovation, governance, kind of that side, and more of the training. So we work together really closely. But the I portion of it really came along with me, because I'm never not trying new things. I'm never not curious. So I'm always
Ab (07:58)
Yeah.
Jennifer Waite (07:59)
looking. And I've tried to hire a team accordingly. Everyone on my team is very curious. They all love to learn, like librarians do. And now, beyond just librarians, we have others; information governance works really hand in hand with the AI team as well, because there's no AI without governance. Otherwise you're just in the Wild West.
Ab (08:07)
Hmm.
Yeah.
Mm-hmm. Yeah. Yeah.
I
Jennifer Waite (08:27)
good to you.
Ab (08:27)
think, and one of the things I was speaking to Alex about in the past is, normally in most firms the InfoSec piece sits under the CIO, the CISO, IT, in some form. It doesn't usually sit under the KM&I banner. I think, yeah.
Jennifer Waite (08:53)
Governance? Yeah, we're separating governance and security that way. So a little unique. You could do it either way. But I think given our trajectory and where we wanted to go with our AI projects, it made a lot of sense for governance to sit with innovation. And it's also a good balance between the two; security, innovation, and governance all have to kind of be in line for things to work.
Ab (09:07)
⁓
Jennifer Waite (09:19)
So having us all work together closely is very important.
Ab (09:23)
I think you make it sound very easy and natural. I don't think that's the reality of the situation, because in most organizations, not even just law firms, usually security, governance, and innovation are, I don't want to say arch nemeses, but they're definitely at odds with each other. Because with innovation, you want to move quickly, and usually that means someone has to draw the short straw.
Jennifer Waite (09:42)
or not.
Ab (09:52)
And of course, in what we do, it's absolutely critical that governance and security are top of mind. But as you think about experimentation, you have to be able to create safe sandboxes or guardrails and enable people to do that. So I just want to call that out, because I know you're making it seem like, oh yeah, it's great, it's natural. I don't...
Jennifer Waite (10:11)
No, okay, there's obviously a lot to it, not just that we work together well now. We do now, because we're unified behind the same strategy, we know what our goals are, and we're collectively working towards those goals. Now, does that mean I'm never like, hey Dan, I want to go try this new software, and he's like, pump the brakes, what's the security story on it, and I have to go find that out?
Ab (10:19)
Yeah. Yeah.
Mmm.
Yeah.
Yeah, yeah
Jennifer Waite (10:41)
That is what happens. So it's a bit of an ebb and flow, but we've had a really good, close working relationship between those departments. And that's really important, but I do think having governance in with the innovation team really helps, because then you're already coming from a governance angle when you're going into innovation projects, and that necessarily leads into security. If something is well governed, it is necessarily going to be
Ab (11:00)
Yes.
Mmm.
Yeah.
Jennifer Waite (11:10)
more secure. And you're going to have answered some of the questions that you would have been asked on a risk assessment anyway.
Ab (11:17)
Yeah. And I think the other thing, I'm calling out things that maybe you guys just take for granted but do a really great job at: you as a firm and you as a leader work really closely and well with vendors. And everyone talks about, you know, we partner with people, we partner with people. Okay, fair enough. Fair enough.
Jennifer Waite (11:39)
We're close with our vendors.
Ab (11:43)
But okay, my experience is that you do work very closely with vendors. You guys have shared that you're partnering with Truth Systems, a friend of the pod with Nam, and others. And same thing with Lupl. It does feel like a really healthy relationship, in that you give a lot of feedback and you expect high standards, which is reasonable. But it doesn't feel like a demand. It comes from a place of, we see value in this, and we're willing to put in the time and effort to help improve it, and here are some suggestions of what's lacking or what you need to add. So it does feel like a strong relationship in that way. I know it's probably not the same across the board with every vendor, but I wanted to call that out because a lot of firms talk about it, but not every firm does a good job of forming that partnership, especially with smaller vendors or some of the key vendors.
Jennifer Waite (12:44)
I think that's important. If you're going to work with someone, whether internally or externally, you have to value them as people, and you have to feel supported both ways. But honestly, we're largely not going to invest in vendors that haven't already crossed that threshold with us. If we're not getting answers back to questions that we have, if there's pushback, if the energy, honestly, if the vibes are off,
It's a very
Ab (13:11)
yeah
Jennifer Waite (13:13)
young people thing to say, but it is very true, because you can tell very early. We partner with some larger vendors, but they are also providing us excellent service and they act on feedback. We also partner with smaller vendors, because we know what we need, and we do tend to lean on our vendors pretty heavily for support, because we are small but mighty.
Ab (13:16)
Yeah.
Yes.
Mm-hmm.
Jennifer Waite (13:35)
Our team is not staffed to have an internal expert on each piece of software. That's just why we lean on them.
Ab (13:41)
Right. Yeah.
What are some of the things that go into your vibe check as you work with vendors? I mean, you spoke about response time. If you can deconstruct that a little bit, what do you look for?
Jennifer Waite (13:55)
Yeah. So obviously response time is super important, because if we're already talking to vendors about a thing, that means I probably have the budget for it, I probably already want something like that, and I have a problem that I'm trying to solve with that software. So when I'm entering a conversation with a vendor, it's never just a, hi, how are you? And it's good to feel that energy back from the vendor side.
Ab (14:02)
Hmm. Hmm. Yeah.
We're exploring, yeah.
Jennifer Waite (14:24)
Okay, they understand we're ready to move on something. So that's where responsiveness comes into play. But it's also really important to have feedback, both to give feedback and receive feedback on what is possible currently. Because we have some software from vendors, who I will not name, where certain promises got made. They'll be like, yes, absolutely, our software can do this. And it can't do that. And honestly, it feels a little misleading once we find out,
Ab (14:28)
Mmm.
Mm.
Mmm.
Jennifer Waite (14:52)
after we've been told it can do that, that it in fact cannot do that. Never has, never will. So those are the kinds of things. We have no problem receiving that information, especially during the exploratory phase. Maybe that's on a roadmap; that'd be great to know. Maybe that's not something the vendor sees for the product long-term. Maybe they're like, that is more of a this-kind-of-service, and we're trying to be a that-kind-of-service. Makes sense.
Ab (14:57)
Right.
Yeah.
Mmm.
Mmm.
Yeah. Yeah.
Jennifer Waite (15:21)
But having that matters. I've seen vendors who over-promise and under-perform. I'd much rather they under-promise and over-perform.
Ab (15:28)
Right.
Absolutely.
How open are you with vendors at that stage? Are you quite open, sharing your requirements: this is the problem that we're looking to solve, can you do this? You obviously have a use case, you have a problem statement. But are you, for lack of a better word, testing the vendors to see how they are at discovery, whether they're able to give you other ideas? And it may not be because you want to test them to be mean or anything, but you're in a known-unknowns kind of situation. Yeah, I'm curious about the openness there.
Jennifer Waite (16:05)
Yeah.
Yeah, no, generally I try to be as transparent and open as possible. It's very important to me, personally on one level, but also in my work life, amongst my team, amongst everyone I work with. People will tell you that if you want an honest opinion, I will give it to you, but be prepared for the actual honest opinion, because sometimes it's not what you want to hear. I'm not the person for what you want to hear. I'm the person for, this is how it is.
Ab (16:21)
Hmm.
Right, right.
Right. Yeah, I think that's good. And I think generally it's appreciated, as long as that expectation has been set. So that makes sense.
Jennifer Waite (16:47)
It can take people a second to warm up to that, though, because some vendors will be taken aback, and I'm like, no, you need to know this. You need to know this right now, upfront. So we can either know that it's going to happen, or know that it's on the roadmap, or know that it's not possible. Otherwise we waste each other's time.
Ab (16:55)
Yes.
Yeah. And we're going to spend a lot more time talking about AI, but I'm just curious. As you think about vendors today, generally in the market the pace of change is fast, and the speed of development should follow. I still see some vendors, who I also won't name, you know, critique in private, praise in public. There are vendors still releasing updates and roadmaps every three to six months, which to me might be okay depending on what it is, but generally the standard has now shifted to a month or less. I'm just curious how much that goes into your vendor selection process as well. Are you thinking about, what's your roadmap, how active is this product? It may solve a perfect problem today, but are you doing horizon scanning of what this could become, and what other
Jennifer Waite (17:35)
Yeah, you can do it when I'm doing it. Yeah.
Ab (17:56)
possibilities or potential there might be that this could help influence?
Jennifer Waite (18:03)
Yeah, I'll say the horizon scanning is something I'm not great at. I am a problem solver by nature, so I do like to see a problem and solve it. But that means that I will necessarily... no, I will also assume that if I think something could be done with that product, the vendor will have thought it could be done with that product, and done it, right? I'm also getting really spoiled now by some vendors, where I'll have feedback and they go fix it.
Ab (18:22)
Right. Yeah.
Jennifer Waite (18:29)
They'll go fix it live and they'll come back and be like, hey, here it is now. You asked for it yesterday, here it is. And I'll be like, thank you. So I think that's kind of changing my whole thinking on roadmaps, because if you have a roadmap, like you said, that's far out, what is it about the thing you put on there 12 months from now that you can't do now, given the technology that's available now?
Ab (18:30)
Hmm, right.
Right.
You go to someone else, and it's like, wait, why can't you do that? Yeah.
Mmm.
Jennifer Waite (18:55)
Because I literally have some software that we've paid for where I'm like, I could have an intern code this in a weekend and host the software for us internally. So why do I need a vendor? So I think it's going to become much harder for vendors. If our firm is thinking about that, if I'm thinking about that, other firms are going to start thinking that too, and they're going to start picking off the low-hanging fruit. And I can think of
Ab (19:18)
Mmm. Mmm. ⁓
Jennifer Waite (19:24)
some products, some software, that has just been hanging on because attorneys are used to using it, and because they feel like that's a software product they need to use because they're this kind of attorney. And once you do an actual analysis of the usage, you find out no, no one's using it. And then you're like, well, we're not maintaining this umpteen dollars' worth of software
Ab (19:32)
Hmm.
Yeah.
Mmm.
Jennifer Waite (19:50)
because you think you like it; it doesn't offer us anything. So I feel like it's going to get harder for the vendor side pretty quickly, when the client themselves could likely do the things they're asking for with just the right team.
Ab (19:51)
Right.
Mmm.
Mm.
Right.
Yeah. I do think it comes down to, what is your moat? And I will say the counter-argument to that, which is a weak counter-argument but a counter-argument nonetheless: of course, if no one's using it, that's a different story altogether. But there are some things that seem easy to do, and you can code them, or vibe code them, 50, 60% of the way pretty quickly, or even 80%. But the last 20%, the last mile, always gets challenging. It depends, right? If the use case is just an internal tool, I think it's a very different conversation.
Jennifer Waite (20:40)
Instead of scaling it to enterprise. Yeah, all of the tools like that that I'm looking for are internal-impact. So it's just the vendor feeling special about what they are.
Ab (20:52)
Yeah, right.
Yeah. I wonder, and this could be a governance piece or what we were just talking about: have you had to say no to people internally? I mean, the attorneys or someone else, where they wanted something, or they have something, and you're like, no, we're not going to do that. Or you've had to reject a tool, because of reasons. How does that conversation go?
Jennifer Waite (21:16)
yeah, absolutely.
Well, the tools we have now are all very secure, and we feel good about them on a risk-analysis basis. With vendors that we have looked at for pilots and things like that, there are going to be some simple questions, and I will be hesitant if they don't have an immediate and swift answer. I'll be like, hey, send me the link to your trust center. You know what I mean? And it'll be like, uh, I can send you a white paper. I'm like, what do you mean? Send me all the documentation I might need
Ab (21:23)
Mm.
Yeah.
Right.
Jennifer Waite (21:50)
about what you're doing on your side because
Ab (21:51)
Yeah.
Jennifer Waite (21:51)
I'm not going to review it, but Dan wants to review it, Alex wants to review it; the people who review it are going to review it. Or if they have insufficient security testing on their side, if they don't red team, or there's something obviously, glaringly missing, I don't like that.
Ab (21:55)
Right. Yeah.
Mmm.
Mmm.
Jennifer Waite (22:13)
And I would totally understand, in a one-off, if someone was like, I'm not sure, I'll go check with the person who handles it. But if they're not even sure who that person is, or if they exist, that is very suspicious to me. And you kind of see it more with smaller, and I don't mean small startups, because I love a small startup, I mean kind of out-of-nowhere
Ab (22:17)
Sure, of course. Yeah.
Mm.
Jennifer Waite (22:36)
AI companies. And it's not that they're never going to get there; maybe they're going to get the security that they need, but they haven't yet. And at that point, it's not ready for law firm consumption, and it's for a different field.
Ab (22:44)
Mm-hmm. Yeah.
for us.
Fair enough. Okay, let's talk a little bit about AI. We're going to talk a lot about AI. Let's start with a different question to where I initially was going to go. Where did you expect AI to work, and it just didn't? What was the easy use case where you were like, this has to work, people have to love this, and it kind of just died?
Jennifer Waite (23:11)
I couldn't tell you. Well, I'm going to tell you, and you're going to have to take it out, because I'm going to tell you exactly what it is.
Ab (23:16)
For everyone who didn't see or listen to any of that: yeah, there's a good answer there, but we can't share it. Do you have another answer, maybe a use case rather than a vendor, or anything like that?
Jennifer Waite (23:27)
Yeah, actually I do. And it's one that comes up again and again and again. It is not that AI is lacking; it is that the thing is not for AI. Everyone will come in like they need automation, which isn't AI. They just need some good old-fashioned Power Automate that's mapped onto a product and then shipped out, or their data is bad coming in. So it's the things that we think AI should be good at. Here's a PDF, turn it into an Excel.
Ab (23:42)
Mmm. Racks.
Hmm.
Jennifer Waite (23:56)
Don't miss anything. That's not a good thing to ask AI to do, because it will miss stuff. That's not what its job is. For that piece, you'd want a standalone piece of software whose only job is to convert from one format to the other and not skip any pages, for example. But I still kind of think, at this point, AI should be able to do that. I should be able to give it a PDF with a bajillion pages, and it should be able to give me an Excel spreadsheet. But I will say, computer-
Ab (23:57)
Right.
Hmm Yeah, yeah
which has a bajillion rows that someone then has to review to make sure it's correct. That's right. Yes. No, I think that's true. Do you spend time in that case to do a bit more user education, on: this is the kind of stuff that is suited for AI, and this could work, but the margin for error is probably higher than your risk tolerance might be?
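The distinction being drawn here, deterministic conversion versus an LLM pass, can be sketched in a few lines. This is a hypothetical illustration, not anything the firm actually uses: a plain parser either emits every row or fails loudly, which is the "don't miss anything" guarantee an LLM can't make. The function name and sample data are made up.

```python
import csv
import io

def convert_rows(raw_text: str) -> list[list[str]]:
    """Deterministically parse delimited text into rows.

    Unlike an LLM, a parser never silently skips content:
    it either returns every row or raises an error.
    Sketch only: assumes no newlines inside quoted fields.
    """
    rows = list(csv.reader(io.StringIO(raw_text)))
    # Sanity check: one output row per non-empty input line.
    expected = sum(1 for line in raw_text.splitlines() if line.strip())
    if len(rows) != expected:
        raise ValueError(f"expected {expected} rows, got {len(rows)}")
    return rows

table = "matter,hours\nSmith v. Jones,12\nIn re Acme,7\n"
print(convert_rows(table))  # all three rows, guaranteed complete
```

The same shape of check applies at page level for a PDF converter: count pages in, count tables out, and refuse to finish quietly if they differ.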
Jennifer Waite (24:29)
Exactly. And if I have to, I'm not going to, so... Yeah, it really depends on the user, because there are some users I'm fine with if they just think it's however-the-magic-happens. They don't really care if it's machine learning, if it's AI, if it's just automation. They don't. Seriously. But if I think it's a user who has curiosity, and they have room to grow and they want to know, then of course. I would just be like, hey, you're not getting the output you want
Ab (24:59)
Yeah. Yes. Just the Power Automate workflow. Yeah, exactly.
Mmm. Mmm.
Jennifer Waite (25:16)
from AI, because the data you're putting in is not ready for AI yet. We can do some other things to the data to get it to that point, and then you're going to get better answers. So I think that's been a piece where we're continually changing our training and changing what we're doing to educate users. Because the real kind of pickle now is, as AI is getting so good, people are forgetting about things like the context window. People are forgetting things like,
Ab (25:19)
Hmm. Yep.
Mmm.
Right.
Jennifer Waite (25:46)
check every single cite. They're forgetting about these things that used to be built in, because it sounds so good, and it is really good at a lot of things.
Ab (25:48)
All right.
Right.
Hmm.
Yeah. What's your view on this? I've been thinking about this a little: when we think about human work, whatever a human being used to do or has done, we don't expect the work to be perfect. Maybe we do, but we shouldn't expect the work to be 100% correct. But when we view the work through the lens of, we used large language models to create that output, our expectation is it must be 100% correct. It has to be perfect. What are your thoughts on this? Because I just find it so fascinating. It's such a ridiculous
Jennifer Waite (26:28)
It has to be perfect, otherwise it didn't work.
Ab (26:36)
frame to have, which I hold as well, by the way. So it's not that I disagree with it.
Jennifer Waite (26:40)
I get irritated when it's like, I just need to change one thing, and instead of doing it myself I'll iterate way longer than I should.
Ab (26:43)
Yeah, yeah.
Exactly.
Yeah, I'm just like, I could just copy this and edit this one sentence, but no, fix it for me. Right. So I know that comes up a lot. And I guess the really complex issue is what you just alluded to. I think you and I play enough with AI that we know what to look for, to some degree. A lot of people don't. And you have the sycophantic nature of AI, which, you know, it's not even hallucinations. It's that
they just go, well, it told me it was correct. I've checked the answers, everything is correct. Why would it lie to me?
Jennifer Waite (27:21)
Because it wants you to interact with it. That's its whole job. It's like being on TikTok or YouTube: the algorithm knows you, it knows what you want to see next. And I don't think the AI companies collectively would admit that their software is a bit addictive, but it is, because there are times where I go down a rabbit hole and I'm working 30 minutes longer than I ever meant to, because I'm using AI. I'm not working less because I'm using AI, I'm working more.
Ab (27:23)
Right, yeah.
Yeah.
Right, of course not. Yeah, it is, yeah.
Yeah.
Jennifer Waite (27:51)
on the different.
Ab (27:51)
I think the biggest lie is that using AI is going to save you time. It will save you time per task, but the amount of time in total you work increases dramatically. The analogy I have is, before, I had two hands. Now I feel like I have 10 pairs of hands and I'm just managing way more. Yeah, exactly. It's a spinning-plates problem now.
Jennifer Waite (28:09)
I have to make tea all of that.
Yeah, I feel that very much. But going back to the percentages, that's interesting too, because so many briefs that are filed with judges, and so many judicial opinions themselves, have errors. And we know that. And we accept, as you said, a certain amount of error from people. We're like, it's people, they can make errors, and that's fine. I think it's probably because, as people, maybe we feel more sympathetic towards people for making errors and less sympathetic towards
Ab (28:26)
Mmm.
Mm. Mm. Yeah.
Jennifer Waite (28:43)
artificial intelligence for making errors. It's a machine, it should be better than us, it should be perfect. But I think that's a false dichotomy of an argument, right? Yes, it makes errors, but did this get you to 80% of your draft? Sounds great. Why are we waiting?
Ab (28:45)
Hmm.
Yeah, yeah, yeah,
yeah. Okay, and I think that's a good segue. I want to talk about Perplexity Computer. You posted about this recently, and I can see you're excited about it already. I think the sentiment was: manage the AI teammates like a small team, pick the right jobs, focus on duplicate runs, and turn patterns into skills, all of that. So yeah, talk more about that. What triggered that post? Was there a specific mess or something, that you're able to talk about, that you were just going through? I also liked the Oliver Twist analogy in there. It made me laugh. But yeah, I'm curious about the origin of that, and just generally, what are you doing with Perplexity Computer and how do you use it?
Jennifer Waite (29:30)
Yeah. I just like that joke.
No, so we have Perplexity firm wide, so everyone at the firm can use Perplexity. We use the pro-tier license. Then we have a small group, mostly KM and AI folks, plus a few business development folks and a few other admin folks, who have a Computer license, the upgraded license. Small group, and Computer only rolled out like last month, so I feel like I'm talking historically and it's not; it's just been, like, five weeks. But in that time we've run into a credit issue, where, like, every
Ab (30:00)
Yeah, of course. Yes.
Mmm.
Jennifer Waite (30:09)
Every user gets their own bucket of credits, right? For the year. And we have to figure out a way to manage those better, because every user gets a bucket; there is an organizational bucket, but you only hit that once you've gone beyond your own level. We've talked about that with our rep at Perplexity as well. They know, and we're working on it together: how can we make this work? We want to use Computer. It is wonderful. It is amazing. I'd compare it to Claude Cowork; that's the only other thing I've used that makes me feel that way,
Ab (30:12)
Yeah.
Okay
Mmm.
Jennifer Waite (30:40)
and it's mostly because even when I'm using Perplexity or Claude or, honestly, Gemini or anything, it'll give me a pretty good first pass. But I've worked on projects in Computer where I didn't need to make any changes. It was as though I gave it to a very smart person who knew my topic, and they gave me their final output.
Ab (30:47)
Right.
Yeah.
Okay.
Mmm.
Jennifer Waite (31:04)
And honestly, what happens when I don't need to do that first pass is that the project evolves. I'd go, well, maybe add this data set, and consider this, because I didn't have to spend the time making changes. So we're finding a lot.
Ab (31:09)
Right. Yeah.
Yeah.
Hmm. What kind of things are you using it for? I'm curious. I use Cowork. I think it's generally good. I spend a lot more time in Cowork and Claude Code, so I can automate some of the things. And on a personal Mac mini, which I'm staring at, I also have OpenClaw and things running. But that, of course, doesn't touch anything sensitive; it's completely walled off.
Jennifer Waite (31:33)
What are you?
Ab (31:42)
Cowork I use for my day job. So I'm curious, what kind of use cases do you have for Perplexity Computer where you feel you're getting really good first passes?
Jennifer Waite (31:51)
Yeah, so it'll be things like the skills we're creating that we can share amongst the other Computer users. Right off the bat, our BD and marketing folks made, you know, branding skills that everyone could use, so any output they were making was branded. Those types of things were very simple. This past month, honestly, I've been working on a large project: we're migrating financial systems, and there's a lot of data cleansing that has to happen across our data. So I'm using Computer to do a lot of SQL coding, and I'm also using it to upload documents and ask: what's a pattern, what's a trend that you see in this data? That can be really helpful, because when I'm running a query in SQL, I'm seeing a very limited slice, right? My human brain cannot capture the whole
Ab (32:30)
Mm-hmm.
Right. Yeah.
Jennifer Waite (32:42)
spreadsheet in my head and start doing trend analysis, but I can give it to Computer and ask, what do you see here? What are some areas? I'll say, I think I'm probably concerned about these pieces within the data, and it'll come back with, yep, you're right to be concerned about that, but did you also think about these five other areas? And I was like, no, but now I am.
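The data-cleansing spot checks described here can be sketched in a few SQL statements. This is a hedged illustration only, not the firm's actual schema: the `vendors` table, its columns, and the sample rows are all invented for the example.

```python
import sqlite3

# Hypothetical vendor table standing in for migrated financial data;
# the schema and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vendors (id INTEGER, name TEXT, tax_id TEXT);
    INSERT INTO vendors VALUES
        (1, '  Acme Corp ', '11-111'),
        (2, 'Acme Corp',    '11-111'),
        (3, 'Beta LLC',     NULL);
""")

# Cleansing pass: normalize whitespace so near-duplicate names match.
conn.execute("UPDATE vendors SET name = TRIM(name)")

# Spot checks of the kind described: duplicate tax IDs, missing fields.
dupes = conn.execute("""
    SELECT tax_id, COUNT(*) AS n FROM vendors
    WHERE tax_id IS NOT NULL
    GROUP BY tax_id HAVING n > 1
""").fetchall()
missing = conn.execute(
    "SELECT id FROM vendors WHERE tax_id IS NULL"
).fetchall()

print(dupes)    # -> [('11-111', 2)]
print(missing)  # -> [(3,)]
```

The conversational point stands: each query like this surfaces one issue the author already thought of, while handing the whole export to an agent can surface the five other areas nobody thought to check.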
Ab (32:54)
Yeah. Yeah. Yeah.
Yeah.
Yeah. Yeah. And then, as you think about the credits, I'm curious, because I've certainly had this with Claude Cowork, where I've hit my limits for the day, and sometimes the weekly limits, and then I'm just like, oh no, how will I work now? How do I do this?
Jennifer Waite (33:16)
There are things I will save. Honestly, my answer to that is: if I know Computer can do it and I have the time to wait on it, I'll wait. I'm not gonna try to do it myself.
Ab (33:29)
I agree, but I think it does
Jennifer Waite (33:29)
And we do.
Ab (33:32)
create the need to think about cost, right? Cost per task: is this a Computer task, or is this just standard Perplexity, ask it for an answer or to help refine something? So how do you think about that?
Jennifer Waite (33:36)
Yeah, that's kind of the big filter now amongst the group. We're like, Computer can do this, and it does it really well. So, first question: how many credits does it cost for Computer to do that? If it's a very cheap process, then I'm like, all right, if you get much better output from Computer versus any other tool, great, wonderful, that helps. But then as we're building out workflows for practice groups, or building out apps that we want to launch internally,
Ab (33:48)
Mmm.
Yeah, all right.
Sure. Yeah.
Jennifer Waite (34:14)
We have to run them through an analysis of how many credits it will cost for another user to run this app. There's going to be a cost, and we're knowledgeable about that, but we compare that cost, say $10 of credits, against a human person who would have taken three hours to do it. You have a huge time savings on the end-user side. And usually it's an admin user whose time is then freed up; you don't have to do
Ab (34:20)
Yeah. Right.
Right.
Mm.
Jennifer Waite (34:42)
those things anymore. We're finding so many areas where we're staffed really leanly, so all of us have 125, 150 percent of a workload. I think it's really helping bring us down to a reasonable amount of work in a regular day. Yeah, it's very good at analyzing data, it's very good at finding trends, and it can make apps really quickly, really well.
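The break-even reasoning in that answer can be written down as a one-line calculation. The $10 of credits and the three hours saved come from the conversation; the hourly rate is an assumed placeholder.

```python
# Hypothetical numbers: $10 of credits and 3 hours saved come from the
# conversation; the $40/hour admin rate is an assumed placeholder.
def net_savings(credit_cost_usd: float, hours_saved: float,
                hourly_rate_usd: float) -> float:
    """Net benefit per run when an agent replaces manual work."""
    return hours_saved * hourly_rate_usd - credit_cost_usd

print(net_savings(10.0, 3.0, 40.0))  # -> 110.0
```

Any positive result means the credits pay for themselves on that run, before even counting what the freed-up person does with the recovered time.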
Ab (35:04)
Yeah.
Hmm.
Jennifer Waite (35:12)
And it's a surprise. Now we're coming up against the problem of having apps we want to launch, so now we have to worry about that. We're at the stage you're talking about: we want to launch things to more users, so we have to think about the credits, and then about how we develop, clean, and refine each app.
Ab (35:19)
Yeah.
Yeah. Yeah.
Yeah, maintain it and all that kind of stuff as well, right? Yeah.
Last question, and then we'll start wrapping up. One of the things I've noticed across organizations, in law firms and otherwise, is that the more senior people, C-suite, directors, and so on, tend to be some of the heaviest users of AI tools. Case in point: both of us. And in a lot of organizations, especially the bigger ones, the more junior people use AI tools less. And I mean even basic use, right? Going into Perplexity or Claude or ChatGPT, whatever you use, to ask a question or refine an email. Now, having spent some time with some of your team members, I think they generally all seem to use it a lot. That's an assumption I'm making; you will know better. But if that's true, what have you done to, I guess, give them the time and the space to learn how to use it, especially when everyone has a 150 percent workload? Because the biggest investment is, okay, I know
Jennifer Waite (36:32)
you
Ab (36:40)
I know, I don't think anyone doubts that it's going to save them time, but they know they're going to have to spend a bit more time upfront to get to that end state. How do you manage all of that with your team?
Jennifer Waite (36:52)
Yeah. So the team is, again, hand-selected for being naturally curious; they want to learn new things. So exclude us from that, right? Because we're already doing it. But across the firm, it's been really interesting to see where the usage comes from, because a year or two ago I would have expected my heavy users to be fairly junior as well. But I am getting some of our most senior attorneys becoming heavy users of some of these products, and everything in between. And I think that in order to get that full spectrum, because I agree, it's usually one way or the other, you have to be willing to adjust your culture. If your culture is not one that allows for training and learning and growing, you're going to leave your workers behind. We need to be deeply concerned about upskilling workers just on a base level, because the work they were doing five years ago is completely different from the work they're doing today, and it's likely completely different from the work they'll be doing in, I don't know, six months or a year. They just need to adapt to that change, and I think that's the hardest part. So we focus a lot on change management.
Ab (38:06)
Yeah.
Jennifer Waite (38:18)
We do a lot of hands-on training, like a ton. I did a whole AI series last year, and another AI series where we did a 101 and then a 201. Everyone across the board has access to a really nice AI tool; some folks have access to four, depending on their role and what they need. So it's really important to me, at an AI-strategy level, for
Ab (38:37)
Right.
Jennifer Waite (38:45)
everyone to have access to it, because it'll be very important for them as workers as they evolve. It won't just be, oh yeah, we know you're adept at using Microsoft Office; it'll be assumed that you are also adept at using an AI tool. And it almost doesn't matter which one it is, because they're all getting so much better. I have some very senior attorneys who never go outside of Copilot, and they love it. They've been getting better at using it, and it's been getting better, which is the real kicker. And then we have some folks who say, oh, I want to try this new tool I've heard about over here, and I'm like, all right, we'd love you to try something new. But the whole leadership and strategy has been focused on retraining, upskilling, and fostering an environment of curiosity and lifelong learning, just
Ab (39:15)
Mm. Mm.
Right.
Yeah.
Jennifer Waite (39:41)
adapting to the pace of change, because the pace of change is never going to slow down; it's only getting faster. So they're able to adapt to the change.
Ab (39:48)
Hmm.
Amazing. Perfect. All right, I have a couple of quick-fire questions, I think five of them. You can give short answers or longer answers as you want. What's one task in your job that you think will be fully automated? I used to ask about two years out, but I'm going to say in the next six months, or twelve months, probably more likely.
Jennifer Waite (40:15)
Yeah, I was like, okay. I feel like most of what I do will be able to be automated, and I think I will just be monitoring the automation of most of the work that I'm doing. Yeah, I think that's what most roles are going to evolve into.
Ab (40:32)
the manager of agents.
Jennifer Waite (40:38)
I don't think we're going to see any reduction in headcount. I think we're probably, honestly, going to hire more folks, because we're able to do so much more work. And there's this massive market that has been historically underserved in the legal field that we could now potentially afford to serve, more people, both bigger and smaller, that it wouldn't have made sense to serve before, because we can do different models.
Ab (41:03)
Okay, what's one AI tool that you use every week, every day that you'd miss if it disappeared?
Jennifer Waite (41:09)
Perplexity Computer.
Ab (41:11)
Yeah, okay. Hopefully one of us gets sponsored by Perplexity after this. Exactly. All right, what's the biggest mistake you see teams making, any teams, when it comes to AI right now? It could be adoption, could be anything else.
Jennifer Waite (41:15)
Give me some credits for this question.
It's burying your head in the sand. Because any way you slice it, no matter how you tackle the AI problem, you're probably going to get some benefit out of it. It's the complete avoidance of the problem entirely that I think is the worst thing for you. Or, at this point, saying, well, I'm going to wait until the tools are so good that then I'll jump in. If you've waited that long, one, you don't have the necessary framework or skills to use the tools that are available, and two, you've lost all that time while everyone else got to that level.
Ab (42:02)
Now, I cannot agree with you more. The number of people I still speak to who are just waiting. Look, even if you don't think the tools are good enough, you know it takes a while for people to learn things; the skills take time. So learn the skills, and as the tools get better, you'll get more out of them. Yeah. Speaking of skills, what skills do you think legal professionals need to stay relevant going forward?
Jennifer Waite (42:26)
It sounds trite and it's been said a lot, but I do agree: it's the human skills. The interpersonal skills, the emotional intelligence, and, again, that curiosity, that desire to learn something new and not be afraid of new things. I think you'll see teams doing really well that are full of those kinds of people, people who are willing to take the time to learn something new and to try and fail at things.
Ab (42:55)
Okay, and final question. If you were starting from scratch today, we just pick you up and drop you into a net-new firm, where would you focus when it comes to AI within a law firm? Where would you start? They have nothing, no resources, nothing.
Jennifer Waite (43:09)
I would refuse to go by myself. I would require that I take at least my entire team with me. So we were dropped over there in a new area.
Ab (43:12)
Sure.
Right.
Okay, well, let's rewind. Let's do it this way; this is a better scenario. We take you and your current team, with all of the skills and things that you've learned, and we drop you in the past. It's the start again, but you know all that's possible in the future. Where do you start? How do you build a foundation?
Jennifer Waite (43:41)
Honestly, it's only been like three years, but it'd be rough to start from pure scratch at this point. I think talking to people and seeing what their pain points are is a really good place to start anything new, especially when it comes to technology. Technology and AI are both really effective when you can say: here is the thing that solves the pain point you were just telling me about, and I can show you how it works. It's not difficult to use; you can learn it. I think having one
little wedge into a user's day, solving one problem, opens the door. Then they'll be like, well, what else can you help me with? We have so many users who come back: they'll email the AI team with a problem, we'll help them with a solution, and then they'll say, well, guess what, I have five more problems you could probably help with. So I think starting small and then eating the elephant one bite at a time is the best way to go.
Ab (44:36)
Amazing. Well, thank you so much for coming on. This was so much fun, and I'm looking forward to continuing the conversation in the future.
Jennifer Waite (44:43)
It's been great.