Engineering Enablement by Abi Noda

On this week's episode, Abi interviews Kent Wills, Director of Engineering Effectiveness at Yelp. He shares insights into the evolution of their developer productivity efforts over the past decade, from tackling challenges with their monolithic architecture to scaling productivity initiatives for over 1,300 developers. Kent also touches on his experience building a business case for developer productivity.

Discussion points:
  • (1:42) Forming the developer productivity team
  • (3:25) Naming the team engineering effectiveness
  • (4:30) Getting leadership buy-in for focusing on this work
  • (7:54) Managing code ownership in Yelp’s monolith
  • (12:23) Supporting the design system
  • (16:00) The business case for forming a dedicated team 
  • (19:45) How to standardize 
  • (23:50) How their approach to standardization might be different in another company
  • (27:08) Demonstrating the value of their work 
  • (32:21) Building an insights platform
  • (38:47) How Yelp is using LLMs

Creators & Guests

Host
Abi Noda
Abi is the founder and CEO of DX (getdx.com), which helps engineering leaders measure and improve developer experience. Abi formerly founded Pull Panda, which was acquired by GitHub.

What is Engineering Enablement by Abi Noda?

This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches on measuring developer productivity. The EE podcast is hosted by Abi Noda, the founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods to help organizations improve developer experience and productivity.

Abi Noda: Kent, thanks so much for coming on the show today. I'm really excited to chat with you.
Kent Wills: Thanks for having me, Abi.

Abi Noda: I first heard about you through a talk you gave recently that was being shared around, and what really caught my attention was just how long of a journey you've been on. From what I understand, almost 10 years, from 200 to over 1,200 engineers [Editor's note: Yelp's latest headcount stats show that they have over 1,300 developers], immense growth at Yelp. And so, I wanted to start at the very beginning, because one of the most common questions I get from folks who are trying to tackle productivity at their companies is how do I get started? So, I'd love to hear the story of how developer productivity or EE, whatever it was called, began at Yelp.

Kent Wills: Right. Sure. So, one thing to call out here is that I joined Yelp as an intern, so it's been 10 years at Yelp, from intern all the way to leading the organization, but I wasn't making those decisions in the beginning. It was my predecessor at Yelp who was making them. And he was really drawn to this article from Twitter, and the statement that we really focused on was, "Let a thousand flowers bloom, and then rip out 999 of them."
And that was a blog post written by Peter Seibel, and he was looking at how Twitter would continue to scale the organization in an efficient way. And I think Twitter had a little bit more of a liberal policy around the languages and the tools that you could use.
And so, there was this aspect of being able to scale, but needing to potentially be standardized to do that in a lean way. And so, that was really how we started to think about engineering effectiveness at Yelp, given a group that had thought about it before for a company at scale, even though we were much smaller than Twitter at that time.
Abi Noda: I know Peter, he was on this show previously, and his group was also called Engineering Effectiveness.

Kent Wills: Yeah.

Abi Noda: In the most recent talk you gave, the title talked about developer productivity. So, I'm curious about the name evolution from EE to developer productivity.

Kent Wills: Yeah, I think there wasn't really a need for us to reinvent the name back in the day. But today, everybody is calling their teams the developer productivity group or something like that.
So, when I talk about Engineering Effectiveness, sometimes people don't know what I'm talking about. Essentially, they're pretty similar, with some caveats, because each company takes a different flavor of this depending on what's important to that company. And I also noticed that some companies take a more infra-focused or platform-focused approach for their Engineering Effectiveness teams. And over time, our team, because we were in the consumer org, ended up being a little bit more application focused.
Abi Noda: And what was the first thing that you all did? One of the things I tell people who are trying to get started is to look for a quick win or look for that burning problem. What was that first thing that was the catalyst for the group even forming?

Kent Wills: So, Ken Struys, who was my manager, was working on the consumer team with my now boss, Wing, who's the Head of Consumer, VP of Consumer today. And Ken went up to his boss at that time and said, "Hey, after a while, we're not going to be able to ship to production if we don't take care of some of these problems."
And so, we didn't have any dedicated time to start thinking about how we're going to solve some of these long-term problems that were coming up. And so, Ken put together this document to pitch to Wing and the rest of the organization and say, "Hey, can we start spending some time here? Can we get a couple of developers to start focusing on solving this problem? And oh, by the way, I think we have a solution to solve this problem, but we just need the dedicated time and effort in the space."
Abi Noda: What's your advice to people who are one step before where you were? They see a problem in the organization. A lot of the people I talk to sometimes express a sense of hopelessness or pessimism that they can get that leadership buy-in. Do you have any advice on how to do that effectively?

Kent Wills: This is a pretty tricky one. And I think it depends also on the scale of your organization. So, maybe if we tried to pitch the same problem when we were at a hundred engineers, maybe we wouldn't have been able to influence an outcome there. If we were before product market fit, then maybe it wouldn't have made a lot of sense to create a developer productivity group.
And so, we really were just scaling until we hit a point of having a real problem in the organization. And I think what helped was that there was already an understanding that there were problems in this space. So, we weren't saying that there's this fictitious problem that's coming up in the future. We said, "Hey, right now, we are currently having problems shipping to production. And with this trend, it's going to continue to get worse and worse over time."
And so, really it was just around what the company's tolerance and the leaders' tolerance in the organization was for actually dealing with this problem. I mean, this goes back to influencing in an organization, which could probably be a whole other discussion, but how do you help other leaders understand that solving this problem gets them what they need in the organization? In this case, it was directly tied to this leader's success. If we can't push to production, then that's going to be a problem for the company.

Abi Noda: I like that advice of thinking of it through the lens of how does this help other leaders get what they want. I think that's good advice. Now, I know that at this moment there were a lot of different challenges, interrelated challenges. One thing you talked about in the talk that I was curious to double click into was the ownership problem in your monolith.
And I understand eventually, you moved towards breaking it up into separate services, but at least at that time you were really thinking about how could you improve separation of code and ownership through other means. So, I'd love for you to share what were the different possible solutions you were thinking about?

Kent Wills: Yeah, so this had a bunch of different phases. In the monolith, we had a lot of code. And what we relied on in the beginning was just using git blame and understanding who actually wrote that line of code. And that works pretty well for a small organization, or an organization where you don't have turnover, but what happens when that engineer leaves the company? Have you created process and structure to support continued development in the code base when things are constantly in flux?
And so, it really starts to become a scaling problem after a while. And I think that was a key moment for us: as we would look at who wrote the code and try to get help from developers across the organization, we realized that we needed to come up with a better solution. And we started to take more of a team-based approach, because teams last a lot longer than individuals do, and they have concrete ownership.
And so, when we were thinking about this, okay, well, now how do we get team-based ownership? We have to add metadata to the files, the tests, the repositories. And you mentioned that when we went to services, you naturally get some of this compartmentalization. So maybe the problem is a little bit easier there, but you still have the problem of people leaving, and maybe there's a service where many different teams and many different people are operating.
So, we basically just iterated and iterated on capturing different levels of granularity in the monolith. This was before we moved to services. We would maybe say, this folder is owned by this team. We would maybe have a decorator that says, this test is owned by a specific person. And so, that was challenging and tedious.
And it also was a reason why we tried to push people out to services, because we were able to start solving that problem a lot quicker. We would just say, "Okay, this team owns this service, let's put a file there." And then, we would just port that code out bit by bit and solve that problem over time.
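[Editor's note: as an illustration of the team-based ownership metadata Kent describes, here is a minimal, hypothetical Python sketch. The @owned_by decorator and the OWNERS file convention are illustrative assumptions, not Yelp's actual tooling.]

```python
# Hypothetical sketch of team-based ownership metadata. The decorator
# and OWNERS file convention are illustrative, not Yelp's actual tools.
from pathlib import Path

def owned_by(team: str):
    """Attach a team owner to a test, so failures can be routed to a
    team rather than to whoever `git blame` points at."""
    def decorator(fn):
        fn.owner = team
        return fn
    return decorator

@owned_by("core-web")
def test_homepage_renders():
    assert True  # placeholder test body

def find_owner(file_path: Path, repo_root: Path):
    """Walk up from a file toward the repo root and return the team
    named in the nearest OWNERS file (folder-level ownership)."""
    directory = file_path.parent
    while True:
        owners_file = directory / "OWNERS"
        if owners_file.exists():
            return owners_file.read_text().strip()
        if directory == repo_root or directory == directory.parent:
            return None
        directory = directory.parent
```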

Abi Noda: That's interesting. Yeah, the challenge of how you draw boundaries and lines within a monolithic code base. As you're describing it, it's almost just easier to split it up into services.

Kent Wills: Yeah. Today we have a lot of clever tools, like Learn-Hour, where you can get functional separation of your code. In our monolith, we just had a rule that said, "Hey, if you're writing code for the consumer app, go write it over here. If you're writing for the biz app, go write it over here." Well, then guess what happens? You go, "Oh, well, I want to use this code that was written on the consumer app over on the biz app. Well, let's create a common module." Guess what happens with common? Everybody starts writing generic logic in common, and you start to lose that functional separation.
And then, sometimes people don't even follow the rules, and biz code gets written over in consumer land, because we were relying on people to uphold that standard. And as the organization scales, that standard starts to flap a little bit. And so, that actually turned into one of our key lessons, I think, for a lot of what we do in Engineering Effectiveness, which is if it isn't enforced in some way, then it will likely degrade over time.
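[Editor's note: the lesson that unenforced standards degrade is often operationalized as a CI check. Below is a minimal, hypothetical sketch of an import-boundary checker in Python; the folder names and rules are illustrative assumptions, not Yelp's actual setup.]

```python
# Hypothetical CI check enforcing "consumer code must not import biz
# code" (and vice versa). Folder names and rules are illustrative.
import ast
import sys
from pathlib import Path

FORBIDDEN = {"consumer": {"biz"}, "biz": {"consumer"}}

def violations(repo_root: Path):
    for py_file in repo_root.rglob("*.py"):
        area = py_file.relative_to(repo_root).parts[0]  # top-level folder
        banned = FORBIDDEN.get(area, set())
        for node in ast.walk(ast.parse(py_file.read_text())):
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                modules = [node.module]
            else:
                continue
            for module in modules:
                if module.split(".")[0] in banned:
                    yield f"{py_file}: imports {module}"

if __name__ == "__main__":
    problems = list(violations(Path(".")))
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # a nonzero exit fails the build
```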
Abi Noda: That's really interesting. As you were starting to migrate into separate services, I know one of the projects you worked on was the design system. And you had mentioned to me that originally, the design system wasn't a team, it was just a thing. And in fact, you had contributed to it as a developer. Yeah, share with listeners the evolution from the design system just being a thing to becoming a team.

Kent Wills: Yeah. So, we didn't even call it the design system back in the day. We called it the style guide. And it really came out of Molly, who was one of my managers and Ken, who were trying to move code out of the monolith. And they started with an easy page first, maybe like the careers page for Yelp. Something that doesn't have a lot of logic. You don't have to hit a big database. But there's a lot of styling, right? You have button styles, you have maybe a header, you have a footer, just common components that you might want to use on different pages.
And I think Ken was telling me the story where they ported out this page and they were getting ready to create another page, and he asked Molly, "Okay, well, I want to use all the styles from this page on this new page." And I think they looked at this and said, "Okay. Actually, all these styles are namespaced, like careers-something."
And so, it's actually not super straightforward to port this over because the first thing that we were focused on was just getting code out of the monolith and proving out that this actually worked. And then, once we knew it worked, it was like, "Okay, so how do we start scaling this out?" And so, that's where Molly started to take this idea of a style guide where as you move out of the monolith and you do want to use some of these common styles, common components, how do we do that in a reasonable way?
And so, we created a library around this, and we went from one front-end service that was a collection of pages to 40 front-end services. And we needed to manage that across teams. At first, it was just a core competency. We had an enabling team called the core web team; we called it webcore. And that team was helping move code out the door.
We would always take on a really tough problem. We'd figure it out for the rest of the organization. And then, we'd start to create steps for other teams to follow suit, so they could move their pages out, or we might even consult with them to say, "Hey, you know what? Your page is a little bit too complicated. If you try to do this now, you're probably going to get stuck."
And a part of that solution was having a design system that others could just pick those styles out or use that library to get the styles that they need on the page. And so, we might actually even do an audit on the page that they want to bring out and say, "Hey, we actually don't have all the components or all the styles that you're actually going to need to do this."
So, we didn't even think about this as a team. We didn't have a bunch of developers that could facilitate this. We didn't have a problem just yet, but we knew that as we started to have more and more services, it could very quickly become a problem. And as a lean team, we weren't going to be able to support this.

Abi Noda: This is similar to my question earlier. Tell me about the process for making that business case, the proposal to actually get the funding to turn this into a team. How did you do that?

Kent Wills: Yeah, so we operated for probably a good four years in this space where we didn't actually have a team for this, because it was a split responsibility. And in fact, we only did this on the web to start. We didn't do this for iOS and Android, but we did later create a core iOS team and a core Android team.
So, instead of trying to get funding for a completely new team, what we did was we started to split the responsibility across teams that we already had, the core web team, the core iOS team, and the core Android team. So, that was part of their planning. Every year, they would devote some time to supporting the design system.
After a while, you started to see that that chunk on the roadmap started to grow. And so, that was a way that we could start to say, "Hey, if we had dedicated people to work on this, then these are all of the things that we could do." And even when we got to that point, it was still, "Oh, well, you have teams that are already doing this." Okay, so then it was, "Well, this is what we're not doing on those enabling teams for the web environment. This is what we're not doing for core iOS, because we have to devote this chunk." And still, we did not get funding for the team.
And I was lucky enough to have a really motivated engineer. Molly ended up leaving, and we had an intern that had come in, worked really extensively on the design system, knew all the ins and outs, and was really pushing for this to be a dedicated focus. And what was great about Theresa is that she didn't really care if you were the Chief Product Officer or the CTO of the company. She would just go over and say, "Okay, we need to do this." To a point where I think, at one point, she dropped a stack of papers on the CPO's desk saying, "This is why you should invest in the design system."
And I think what it really took was for the CPO at that time to just say, "Hey, we actually want to refresh our styles across Yelp." And so, how do we do that? And we said, "Hey, we got just the thing. We have all of the infrastructure that's been built up over time to make this actually something that's easy and doable, so why don't we use this as a way of starting to create a dedicated team and focus, and let's pair with these consumer teams to actually deliver on the product need," which was refreshing styles across the site.
So, it took many years to get to a place where we were able to pitch for another team. And in fact, I don't know if we needed a team in the beginning because we were able to flexibly work with the teams that we already had.

Abi Noda: That's such a funny story because it just shows that the challenge we were talking about earlier of getting buy-in, I mean, no matter how strong your conviction was and how persistently you tried to get buy-in, it wasn't until there happened to be a shiny other initiative that this could support that you were finally able to get that buy-in. So, that's a really funny story, and one definitely I think a lot of listeners will relate to.
One of the things you mentioned in the talk that I found interesting was this point of view that standardization is a requirement, or prerequisite, to be able to drive Engineering Effectiveness and do the type of work that you do. You touched on this earlier, but I would love to hear what informed that point of view and how you arrived at it.
Kent Wills: Yeah, so we started with this point of view because we were largely influenced by Twitter and that blog post. And the idea of the group was to be able to run a lean group, so that we didn't actually have to hire more for this type of function. We didn't want developers in consumer and biz, and maybe our ads group, all doing the same function. We wanted to consolidate that function. And the only way to really do that effectively was to have some level of standardization.

Abi Noda: One of the most common frustrations and challenges I hear from leaders in your role is how to navigate, how to actually standardize, how to find that right balance between standardization and freedom and autonomy. And oftentimes, I think there's even a little bit of an adversarial tension happening within an organization where developers feel like they're being forced and the platform team really has the best of intentions. And so, I'm curious about your perspectives on that and how you think about navigating that challenge.

Kent Wills: It's pretty tricky. And the way that I think we started off on the right foot is that, we had developers from product teams that intuitively knew the problems that were there because they were dealing with those problems firsthand. Now, you have to be careful though, because essentially what we're doing is we're setting a bunch of opinions that we want to move forward with and that we want to support.
And so, there is this back and forth that we have to have. And also, there's probably some flexibility that we need to be able to have. So, what we ended up doing was saying, "This is what we support as a group. These are our opinions in the space. This is why we've gone down this path, and it is open for discussion. And so, if we are not doing this in the best way, then we're willing to make the change."
Now, the tricky part though is that some of these discussions turn into more philosophical debates. You always hear about the developers arguing about tabs versus spaces, whatever it might be. But in some cases, we just have to make a decision and say, "Look, this is how we're going to operate as a company, or this is what we will support as a developer group on the Engineering Effectiveness side."
You can go off and do the thing that you want to go do, but just keep in mind if you do that, it's like forking a repository. You're now fully responsible for everything that comes with supporting yourself. And we don't really want people to go in that direction, but we're not going to sit there and stop them if they actually have a business need to move forward in that way.
And in fact, that might later evolve to new support that we would have in the future, but that would have to come with some standardization, centralization. So, maybe it's not standardized in the beginning. But if you actually want support for this, if we want to do this in an efficient way, then we probably need to look at what needs to change, so that we can support this more broadly as a group.

Abi Noda: This is a bit of a hypothetical question, but we're talking here about how standardization is a prerequisite to being able to tackle something with a centralized approach. There are organizations out there with a ton of sprawl, many thousands of engineers. Again, this is hypothetical, but what are different manifestations of your function that you can envision at those types of organizations? Do you think at some organizations there need to be several centralized functions? Or do you think at some organizations standardization is maybe not the right path? I'm curious about your take.

Kent Wills: Everything is a tradeoff. And so, if we're saying that we want to be more standardized, then we probably also need to make sure that we're continuing to be creative with the solutions that we have. And that's sometimes tricky because we're saying just do it in this one way, but then there's many different ways to solve a problem. And so, what's the best way? And oh, your problem is maybe different than another person's problem that we're trying to support.
And so, there's this constant rebalancing of what we are supporting as a standard across Yelp. And some things we tend to be more opinionated on. So, we did as an organization say, "You know what? There's only a certain set of languages that we're going to support." And if you try to grab that new language that you want to work with, say, Go, because it's super performant, well, you're actually going to need approval from the CTO to use it, because in the long term it's going to become a maintenance burden. And if you don't have Go experts in your company, then what happens when those engineers leave? That's actually a risk for the company.
So, there are aspects of this where we just say, "Hey, the languages that we use might be Java, Python, and JavaScript. We might use React as a framework." So, we pick some of the spaces that we want to operate in. Now, from there, there's a little bit more flexibility, but you get a lot of this stuff for free. Say you're developing in React land: the core web team is going to support your webpack build for you.
So, yeah, maybe you could use another bundler. But why would you do that when you just get this for free? And really, why do you want to use another bundler? There's usually an underlying problem that's there. So, maybe the build is slow, great, tell us that the build is slow, and then we'll figure out how to solve that problem. You don't always need to change the technology to solve problems. And in fact, that's something that I think we also try to balance is we try not to change too often because that results in migrations and friction for your consumers.

Abi Noda: It makes a lot of sense. I want to shift gears a little bit and talk about, let's call it the latter half of your journey, as you reached 400 engineers and beyond. I know a lot of things changed. Your organization grew, but from what I understand it stayed about the same percentage relative to the rest of the organization, around 10%.
One of the challenges I heard you bring up that I thought was really interesting was how at the beginning, things are a little bit more bottoms up, and as your organization grew and matured, it felt a little more top down. You talked about how at this phase, all the developers were happy, but then executives came and started saying, "Hey, what's the value of what you're delivering?" So, share with listeners how you navigated that.

Kent Wills: There were a couple of phases here. So, when we started, we were solving a problem, it was to get out of the monolith. Once we actually started to figure out that problem, and we started to have more funding because we figured out that problem, it was like, "Okay, well what do we work on?" And so, the way we did that was, we collaborated with our product counterparts to understand where they had pain points in their flows.
In fact, we're essentially running a product team inside the company. We're doing user research with these developers. It's not stat sig, it's a smaller group of people. So, when you say bottoms up, that's how we approached the situation. For our enabling teams, say core iOS or core Android, core Android reaches out to the Android engineers at the company: "Where are your bottlenecks? Where are you having problems?"
And so, we built up a lot of trust with the developer base by saying, "Hey, we're going to solve these problems for you, and also we're going to listen to you as we're talking about the solutions that we have in mind and understand whether or not that solution works for you. Are we getting stuck on a philosophical debate on what technology we want to use? Or are we just figuring out a solution that makes sense for solving that problem right now?"
And then, you get to a phase where the developers are super happy with everything that you're doing. But if you're not continuing to tie what developers are doing to what is going out the door, to what is shipping against the product plan, you start to have execs go, "Oh, well, this is actually a pretty big group now." It's 30, 40 people, and developers are saying they get a lot of value from this, but how much value are we getting from a group that's this large?
In the beginning, and this is what Ken, my previous boss in the space, would do a little bit: he would look at how many Android developers there are, how many iOS developers, how many web developers. And then, just like you mentioned, thinking of it as a percentage of the organization: this is how many people we need to hire for this.
But that gets to a point where you still suffer from not sharing the value. It's almost like you've already gotten buy-in for the org existing, and buy-in that you should be roughly 10% of the org, or 10% of the people that you support. But think about the conversations that Ken's manager has to have with the head of engineering: "Okay, well, I want to hire two more people, three more people, four more people." At some point, it doesn't cut it to just say, I think this group should be 10% of the organization.
And so, this started to creep up and I think was the impetus for starting to have a metrics platform to be able to measure the value that we were creating in the group. And that started before I took over EE. But then when I took over EE, I had to really think about how we were mapping what we were doing on the ground with what was shipping out into production.
And so, while we did have some metrics we were trying to create at that time, this was before DORA was super popular, and we were just trying to capture a lot of our best practices. We had to reframe what we were doing. So, we took consumer products that were shipping, and we had the work that we were doing, and we showed how the work that we were doing facilitated shipping those things out faster.
And so, the teams that we supported were our biggest advocates, and they had a lot of visibility with the executive group. And so, we actually started to get a lot of our buy-in for growing the group even more at that point, and not just one head here, one head there, but to the point where execs were like, "Oh, I see, when I invest more in this group, I see how this directly relates to the products getting out the door faster."

Abi Noda: You touch on how this problem of justifying and conveying impact was at least part of the impetus for the metrics work and platform you all built, which I'd love to talk about. First of all, I asked you this earlier, but what were some of the inspirations for your insights platform? Seeing the screenshots you shared, it made me think of some of the work LinkedIn or Spotify has done. But yeah, I'm curious what that early part of the conception of that project looked like for you.

Kent Wills: Yeah, this is probably something that has been built in many large organizations and we're no different. And today, there's a lot of tools that are available to do this, but back when we were thinking about this, we didn't have those tools. And in fact, we weren't thinking of it in terms of what metrics can we track here? We were thinking of it in terms of the engineering best practices that we're trying to encourage across the organization.
So, we tried to codify those engineering best practices, and it might've started with some basic things that resulted from the monolith. With the monolith, we had to do a lot of distributed testing. When we ran a lot of tests, some of those tests would be flaky. So, what do you naturally do? You start naturally collecting analytics around flaky tests. And okay, if you're collecting that data, how do you surface that data effectively to the teams that care about it?
And so, we did that a little bit in what we'd call a test results UI. So, there was a whole separate platform for that. But then we would have maybe another problem that crops up: maintaining dependencies across all of these services. We had a tool we called bumpy back in the day, which would bump dependencies. And webcore was working on this, specifically for web.
And we said, "Oh, we got a lot of data around how far behind people are with their dependencies. We actually, have now data with potentially vulnerabilities that we have in our code base." And so, how do we surface that? We could have everybody go to all of these different tools. But then, okay, that's probably not super efficient. I mean, it was nice to be in the developer workflow, so developers would naturally interact with these tools.
But as a manager for the team, as a director for the group, as a VP of an org, how do you understand what this looks like for the broader group? And so, that's where we started to create this new platform, this eMetrics platform where we could start to consolidate all of this data across all these different applications that we had. It ended up being the hub for all of these tools that we were creating.
So, we weren't even thinking about DORA metrics or anything else at that time. We were just thinking about how do we get some of these best practices and how do we get some of this observability to the teams, so that they could actually start triaging this in a way that is deliberate. A lot of product teams will have dedicated time to fix bugs. Wouldn't it be great if you had a dashboard where they could just knock off items on the list and say, "Look, okay, I fixed things." And by doing this, it's actually going to reduce risk in the long-term for my team when shipping out into production.
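[Editor's note: a schematic sketch of the flaky-test analytics Kent describes: collect test-run records, flag tests with mixed outcomes on the same commit, and group them by owning team for a dashboard-style report. The record shape is an assumption for illustration, not Yelp's actual schema.]

```python
# Schematic flaky-test detection: a test that both passed and failed on
# the same commit is flagged and grouped by owning team.
from collections import defaultdict

def flaky_tests_by_team(runs):
    outcomes = defaultdict(set)   # (test, commit) -> set of pass/fail results
    owners = {}                   # test -> owning team
    for run in runs:
        outcomes[(run["test"], run["commit"])].add(run["passed"])
        owners[run["test"]] = run["team"]
    by_team = defaultdict(set)
    for (test, _commit), results in outcomes.items():
        if results == {True, False}:  # mixed outcomes on one commit
            by_team[owners[test]].add(test)
    return by_team

report = flaky_tests_by_team([
    {"test": "test_search", "commit": "abc123", "passed": True,  "team": "core-web"},
    {"test": "test_search", "commit": "abc123", "passed": False, "team": "core-web"},
])
print(dict(report))  # {'core-web': {'test_search'}}
```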

Abi Noda: And how did you actually build it, on the technical side? I imagine you were pulling data from across your developer tooling and putting it in some kind of data warehouse. Then how would you tailor this for each team? Perhaps each had a different way of working, a different way of considering what's a bug or what's an error. How did this actually work?

Kent Wills: This is another one of those stories where we didn't have a team dedicated to doing this. So, we had the release engineering team in the group, and Ken was the manager at that time. And he didn't have funding to be able to create an analytics team.
So, we looked at the teams, and one of the nice things about the release engineering team is that it was pretty common across platforms. If we went to the core web team, it would be focused on web. But the release engineering team was a little further down the stack, managing Jenkins infrastructure, pipelines, and stuff like that.
So, the development started with intuition around incorporating metrics collection into our standardized systems, and we wanted to do it in a way that was super easy. So, they might create a script that emits metrics during the build, something that you could put in Jenkins.
So, it was something that allowed developers to easily emit these metrics to a centralized source. And in the beginning, it was the release engineering team that was deciding what would be emitted. So, we didn't really have confusion in the beginning. I mean, we were still trying to figure out what we wanted to emit and what we wanted to track, but we didn't have disagreements of opinion in that space, because it was being collected by one team and wasn't necessarily being shared out to other teams at that point.
And that's a whole other topic of conversation, but starting in that way maybe led to less adoption and less buy-in in the beginning for using a platform like that. But it did start with an opinionated view of the world on what we would collect and how we would report. And in fact, in the beginning, I think it just served as an effective tool for communicating to executives some trends that we would be able to see over time, without having to spend an inordinate amount of time trying to recreate history across all of our build systems and all of our logs.
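[Editor's note: a minimal sketch of the kind of "emit a metric from the build" helper Kent describes, something a Jenkins step could call to send one record to a central collector. The collector URL and payload shape are assumptions for illustration; JOB_NAME and BUILD_NUMBER are standard Jenkins environment variables.]

```python
# Minimal sketch of a build-metrics emitter a Jenkins step could call.
# The collector URL and payload shape are made up for illustration.
import json
import os
import time
import urllib.request

COLLECTOR_URL = os.environ.get("METRICS_URL", "http://metrics.internal/api/emit")

def emit_metric(name: str, value: float, tags: dict) -> None:
    payload = {
        "name": name,
        "value": value,
        "tags": tags,
        "timestamp": time.time(),
        "job": os.environ.get("JOB_NAME", "unknown"),   # set by Jenkins
        "build": os.environ.get("BUILD_NUMBER", "0"),   # set by Jenkins
    }
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)

# e.g. at the end of a build step:
# emit_metric("build.duration_seconds", 412.0, {"repo": "main", "branch": "master"})
```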
Abi Noda: That makes sense. I want to conclude by asking you about how you guys are thinking about and/or using LLMs, which was something you had mentioned in your talk. So, yeah, whatever you can share. How are you guys thinking about it? What are you guys planning? What are you guys doing?

Kent Wills: Yeah, this is the topic of conversation nowadays, I guess: how are you incorporating LLMs into your workflows? And I think it could be a really powerful tool, and it's something that we're continuing to investigate. You really need a tool that starts to deal with some of the fuzzy aspects of dealing with content. And that means it might be a little bit more flexible when you're using these types of tools, but it also means that it's not as deterministic.
So, we're evaluating using LLMs in a couple of different ways. One is thinking about how we can help support our on-point. Again, we're not starting from, "Hey, LLMs are cool, let's just use them." We're starting from pain points that we have, which for my support teams are: how do we manage giving support to a wide developer group, when we are still a lean team, without impacting what we're actually delivering through the year?
In the past, we created a wiki page. In fact, one engineer created this wiki page called y/ask. y is an internal linking system that we have at Yelp; just think of it as a short link. And they created this because people would come into the channel and they would ask to ask: "Hey, can I ask you a question?" They wouldn't provide a stack trace for the problem they were having. They wouldn't be able to put together a minimal reproducible example. Maybe they didn't look at the documentation.
So, there was a whole class of support where maybe the person didn't actually need to come in and ask for support. Maybe they could have been empowered to solve that problem themselves, but we still had to be in the loop because we were there answering the question. So, then the question is: can an LLM, with knowledge of our documentation, current and past questions asked in the Slack channel, and current errors in production, give a better or quicker answer to that person and point them in the right direction, without creating more of a bandwidth burden for the support team?
And so, that's the goal. But it's tricky, because that has to be high accuracy and it has to be dependable. And I think that's where we're iterating: is this going to be dependable enough that it actually saves time? Or is this going to be frustrating for developers? Kind of like when you call any major company's support line and you get, "Please select one, two, or three to speak to a customer service representative."
We don't want that environment to happen with the developers that we support. We want them to still depend on us. So, how do we do that in an effective way where they can get the answer, and also identify if we need to jump in where it's a really complex space? That's one use case. Another one is migrations.
I think this is something that's still a little bit tricky, and we're still trying to figure out how to do this as an industry. Really, what we want to be able to do in the long term is to say, "Hey, all of this support that you have to do, all of this maintenance that you have to do to keep your services up-to-date: why do you actually need to go in and make those changes, when we already know what changes have been done on previous services? We already know that going from this minor version to this next version is actually just a set of steps."
So, we automate a lot of this today in a deterministic way. Can we actually have an assistant that helps us do these migrations in a more hands-off way, in a less curated way, that's still highly accurate? And that's tricky. And it's not as dependable with LLMs, especially if they hallucinate. That might actually end up taking more time away from developers.
But again, it's a new tech that's out there that can potentially help us rethink how to solve some of these unsolvable problems. And I think it's a good exercise, every time the technical landscape changes, to go back to the problem set that you have and ask yourself: is this really hard problem actually much easier now because of the technology that's out there?
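[Editor's note: a schematic sketch of the documentation-aware support assistant Kent describes earlier in this answer: retrieve the most relevant internal docs and past Q&A for a question, ask an LLM to answer only from that context, and escalate to the human on-point when it can't. llm_complete is a stand-in for whatever model API is actually used, and the keyword-overlap retrieval is a deliberately crude placeholder for embeddings.]

```python
# Schematic retrieval-augmented support bot. `llm_complete` is a
# stand-in for a real model API; keyword-overlap retrieval is a crude
# placeholder for embeddings. Escalation keeps a human in the loop.

def score(question: str, document: str) -> int:
    # crude relevance: count shared words between question and document
    return len(set(question.lower().split()) & set(document.lower().split()))

def answer(question: str, knowledge_base: list, llm_complete) -> str:
    context = sorted(knowledge_base, key=lambda d: score(question, d), reverse=True)[:3]
    prompt = (
        "Answer the developer's question using ONLY the context below. "
        "If the context is not sufficient, reply exactly ESCALATE.\n\n"
        "Context:\n" + "\n---\n".join(context) +
        f"\n\nQuestion: {question}"
    )
    reply = llm_complete(prompt)
    if reply.strip() == "ESCALATE":
        return "Routing this question to the support on-point."
    return reply

# usage with a canned stand-in model:
docs = ["To rerun a flaky test, use the retry flag.", "Deploys happen via the push train."]
print(answer("How do I rerun a flaky test?", docs, lambda p: "Use the retry flag."))
```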

Abi Noda: It's great hearing these ideas and how thoughtfully you're thinking about deploying LLMs. As you mentioned, this is the topic of conversation these days and something I get asked about a lot by folks in your position. So, this advice is really useful.
Kent, this has been a really fantastic conversation. I loved hearing about the journey and listening to your talk, which I would highly recommend to listeners of the show who are interested in learning more about your journey. Thanks so much for your time today and for coming on the show.
Kent Wills: Thanks so much. This is something that is super personal to me because it's been the last 10 years of my life. So, thank you for letting me share my story.