This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches to measuring developer productivity. The EE podcast is hosted by Abi Noda, the founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods to help organizations improve developer experience and productivity.
Abi Noda: Gilad and Amy, really grateful for your time. Thanks for coming on the show today.
Gilad Turbahn: For sure. Happy to be here.
Amy Yuan: Thank you for having us.
Abi Noda: So, you both lead a new team that has been driving forward a major developer productivity initiative at Snowflake for the past eight months. As we've talked about your journey, I understand there were teams at Snowflake working on developer productivity for years, but about eight months ago your leadership team elevated developer productivity to a P0 and brought in new leaders to help drive it. So I want to start by asking: could you share the story behind that transition? How was developer productivity being tackled before, and what led to it being elevated by leadership as a P0?
Gilad Turbahn: I think there's a couple of things. First of all, there was a lot of groundwork laid down by the team beforehand, and a lot of what we're able to do today is because of that groundwork. The two major changes that happened at that time were, on the one hand, a lot more rigor around customer impact, not just being able to put out whatever we committed to putting out. And secondly, focusing on, call it, short-term impact and not just long-term deliverables. Not taking our eye off the ball on the long-term stuff, but still making sure we care about the short-term impact. To give a very simple example, we took on four different migrations, all of them very big: the transition from different build systems to Bazel, the transition from our old dev VMs to cloud workspaces, a new testing framework called Snowtrail, and our new CI framework.
When you take on all four at the same time, the team can seem like it's gone dark for a long stretch working on the new stuff. It's a perception thing; nobody has actually gone dark, but it can seem that way. And therefore, one of the main things we wanted to do was get quick wins with everyone, so they can see: yes, things are improving, we're listening, and we're acting on what you're saying, not just waiting until these big systems land.
Amy Yuan: Yeah, I would say another thing in the last eight months was that making this a P0 helped the entire company see this is a company-wide initiative. That's how we were able to not just mobilize our own team, but also get volunteers across the company to staff some initiatives that traditionally aren't in the developer productivity space but are fundamental, instrumental, to being able to move the needle. And it let us have that very tight feedback loop, like Gilad said, to make sure our customers see the short-term wins and have confidence in the long-term migrations. A lot of the migrations were years in the making, but the last three months are when we finally started to see the effort bear fruit.
Abi Noda: I get asked by a lot of leaders in roles similar to yours, "How do we get executive leadership to care about developer productivity?" I don't necessarily expect you to have the silver-bullet answer to that, but I am curious: how does your ELT think about developer productivity? How do they talk about it?
Gilad Turbahn: Amy, do you want to take it first?
Amy Yuan: I think Snowflake is unique in that our founders still code. Snowflake was founded by two architects from Oracle 12 years ago, and to this day they still sometimes operate as IC architects. And every architect Snowflake has hired, no matter what level, including our CTO, writes code. The reason is they don't live in an ivory tower. They understand the ground reality. They understand how long things will take. They understand what is feasible. They also understand how hard or easy it is to check in code.
For example, our co-founder recently told us that last year, when things were hard, he stopped writing code. It made him sad. And then, because of all the improvements, now he can write code again and he's happy. This is why eight months ago our CTO said making developer productivity frictionless is his P0 project. He personally spent more than 50% of his time defining the work streams, getting people excited, and giving guidance. That's how much support we have from the founders and CTO. But it's not just them; it's the same with our CEO, our CFO, and our chief product officer as well. They all see this as fundamental to Snowflake's success.
Abi Noda: Amy, that's really interesting because that's a pattern I've seen at other companies too. For example, Dropbox is an organization I know is very heavily focused on engineering efficiency and productivity, and as you probably know, Drew, their co-founder and CEO, is a developer and still codes. So having founders or executive leaders who were or are developers themselves certainly seems to be a pattern. I also hear about, and have observed, challenges when companies that began with technical founders evolve and new executive leaders come in, when there's a changing of the guard. I know at Snowflake your CEO is fairly new but, as I understand it, got tuned in to the state of developer experience and developer productivity pretty quickly. Can you share how this happened? Did you have any hand in that, or did your new CEO do this on their own?
Amy Yuan: Yeah, I can start. So Sridhar came from the Neeva acquisition. Neeva, being a startup with a smaller code base and a smaller team, was pretty efficient, right? They were able to push code at much, much faster velocity. So at acquisition time, when they started making changes in the core Snowflake product, they began to see a difference in code complexity, and of course in velocity as well. This was very important to him before he became CEO, because he was on the senior staff before taking the CEO role. So when he became CEO, this was, just like our CTO said, one of the top things he really wanted to move the needle on. He was very committed from the very beginning.
Gilad Turbahn: When you look at the scale of Snowflake, and it's true elsewhere, I'll pick on Meta for one second: when Mark Zuckerberg came up with the Year of Efficiency, one of the five initiatives in it was developer productivity. And it was very clear why. When you have a very large, engineering-based workforce, every 1% change to its productivity impacts so many people, and can have such a big impact on your speed to market, your bottom line, and your ability to retain employees, that it becomes a top company priority. Sridhar very clearly said, "This is our way to have an impact on our entire engineering org and, frankly, on the company's top and bottom lines."
If we're pressed to get stuff out to market as quickly as possible and our developers are slowed down, then we need to unslow them. We need to make Snowflake fast. That's why leadership is putting this much attention on it, and why there's so much rigor around the investment. As for the involvement, Amy mentioned that Murali, our CTO, is heavily involved; it's because the project is such a high priority that the CTO needs to be aware of what's going on and make sure we have the right short-term and long-term solutions.
Abi Noda: That's really interesting, Gilad. It feels somewhat unique compared to the circumstances I hear about from other leaders in your role, and certainly a tailwind, I think, to the work and journey you've been on. I think another tailwind in your journey has been the unique partnership in which engineering and product, Amy, you and Gilad, jointly lead this initiative. That's also not something we see at all organizations. Can you share a little about that partnership with listeners?
Gilad Turbahn: For sure. We both have our perspectives, so we'll both share. I think it starts actually from the way we even think about developer productivity. We're thinking about developer productivity just like you think of any other problem that needs to get solved, and there are products that we build in order to solve those problems. That's where PMs come in. There's a perception in the industry that says, "Well, if engineers build for engineers, they know exactly what to build." And yes, they're absolutely right, they can build solutions that are going to solve the problems that a specific developer is complaining about. But when you're trying to build something at scale that's going to be relevant not just for a specific person that's complaining, but rather is going to solve the underlying pain point for the breadth of all the developers in the company or a large subset of those developers, that's where product people come in.
What I really love about this partnership is that everything I just explained, I don't actually need to explain; it's just a given. It's, "Oh, that's the way we operate. Of course this is how we work." And that's what makes this partnership so unique, because we come to the table knowing that we are now leading an org that's doing... By the way, to be clear, the org is doing the work; we just get to report on it and celebrate it. It's always nice to be in that position. But the team is doing the work, and every team has PMs embedded in it. Every team has data scientists embedded in it. Every team works directly with our documentation folks, because to us it's not just engineering work; we're delivering value to customers. And for that, you need all the disciplines involved.
The partnership comes from a shared understanding that we are building value, not delivering an engineering solution to a small problem. And once you're delivering value, it has to include the marketing communication to developers, the closing of the loop, the metrics and measurements, and the technical details of the architecture you go with. We work on all of these together; that's what makes it unique. By the way, I also love the people and the dynamic, which makes it even more fun.
Amy Yuan: Yeah, I like to tell my team, "It doesn't matter whether we're building an internal- or external-facing product. In the end we're building a product, and we all have customers." In this case, our customers are engineers. The benefit is that it's very easy to talk to your customers; they're literally sitting next to you. And the benefit of such a close partnership with our PMs, especially someone like Gilad, who has experience as a developer, has built external-facing products and taken them to market, and has actually worked at Meta in this exact space, is that he knows what the customer needs, how we need to be data-driven, and how we need to be customer-outcome focused, not just running an internal engineering project where we have a list of work items [inaudible 00:12:39] done. That tight feedback loop, the connection with our customers, is in the end what makes sure we're building the right product for them. For any product, in the end, adoption and feedback are what make it successful.
And then PMs can help keep us focused on our customers, making sure we're building the right thing and iterating based on that feedback.
Abi Noda: Well, that's a strong testimonial from you, Amy, on the importance of the partnership and of the product management muscle being part of this effort. But as I think we all know, we're probably all asked from time to time by organizations that don't have PMs involved in developer productivity whether PMs should be involved in this type of thing. So Gilad, what's your take on that?
Gilad Turbahn: I'm a little biased, of course. Let's talk about what problems PMs are here to solve. Let's take two examples. One is the very basic one: at the end of the day, we need to get customer insights. Let's assume the build experience is not great, or the coding experience itself is not great. Somebody, it doesn't matter what their title is, needs to go sit down with a bunch of developers who are experiencing this and figure out what to do about it. That part of the role doesn't change. The thing is, and I say this with love, because engineers are way more capable than I am at doing those things, when an engineer sees a problem, the first instinct is, "Let's go solve it." That's not the first PM instinct.
The first PM instinct is, "Okay, is what they're complaining about the actual problem or is there an underlying problem here? Who else is this going to be relevant to?"
My job as a PM, or my team's job as PMs, is to go in and know how to ask that same question in a way that helps tease it out. When I go and talk to people, each developer has their very narrow lens: I work on a specific thing on a specific day. But is the problem I'm facing the same one the person from team A is facing, and the same one a much more senior developer from team B is facing? That's a big thing PMs can help with. So that's part one.
Part two is the predictability and the communication around it. If you think about it, it's actually harder with internal developers than with any other customer, because internal developers are a captive audience. And developers as a, forgive me, I'm going to call them a species for a second, complain a lot. I know, I used to be one of them. I still retain that capability. In order to get them to be satisfied, you need to work hard. With internal tools, it's not like they get a choice: you tell them what to use and they'll use it. So getting them to be delighted is really, really difficult.
Where PMs come in, and where we have the ability to make a difference, is by giving people visibility, not just into "we're solving your existing problem," but rather: we listened. Here's the information we see when we talk to a bunch of developers, and we publish that information. When we run our surveys, we tell the developers in the company, through our product all-hands and other mechanisms, "Hey, here are the top-priority issues you've raised, and here's what we're going to do about them." And then we communicate dates, not to hold our engineering team accountable, but so people know it's not just that somebody said what you told us is important; we're actively working on it, and you should expect to see a change by around this time.
And then when we're able to close the loop and message people, "Hey, remember when you told us you had this problem? We've now fixed it. Go check it out," suddenly they know they're in good hands. Suddenly they know there's an actual team doing the work, and they know they can reach out to us with the next issue, think about what else matters, and ask us to go fix it. So that's where product management comes in: the communication, the understanding of the pain points, and making sure we always tie the rest of the groups together. Because we want to make sure we measure things. Let's pick an example. It's not just about measuring how long the system takes to build code. It's from the moment the developer clicks the Build button until they can actually run a test. That's the time that matters.
Even if the system took 15 seconds, if, because of a flaw in how we designed it, it only updates three minutes later, the developer doesn't care that the system took 15 seconds; they care about the full three minutes. Being able to measure things from the developer's perspective, and then translate that into what work we should do and how we measure our success, all of these things coming together is where product can add value.
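To make the distinction concrete, here is a minimal sketch of computing a user-perceived build metric from client-side events rather than from the system-reported duration. The event names, fields, and timestamps are hypothetical illustrations, not Snowflake's actual telemetry:

```python
from datetime import datetime

# Hypothetical client-side telemetry events for one build, as an IDE/CLI
# might emit them. Names and values are illustrative only.
events = [
    {"event": "build_button_clicked", "ts": datetime(2024, 5, 1, 10, 0, 0)},
    {"event": "build_finished",       "ts": datetime(2024, 5, 1, 10, 0, 15)},  # system says 15s
    {"event": "artifacts_synced",     "ts": datetime(2024, 5, 1, 10, 3, 0)},   # IDE usable again
]

def system_reported_build_seconds(events):
    """What the build system alone would report."""
    by_name = {e["event"]: e["ts"] for e in events}
    return (by_name["build_finished"] - by_name["build_button_clicked"]).total_seconds()

def user_perceived_build_seconds(events):
    """Time from clicking Build until the developer can actually run a test."""
    by_name = {e["event"]: e["ts"] for e in events}
    return (by_name["artifacts_synced"] - by_name["build_button_clicked"]).total_seconds()

print(system_reported_build_seconds(events))  # 15.0
print(user_perceived_build_seconds(events))   # 180.0 -- the number the developer feels
```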
Amy Yuan: I'll add two things that might not be obvious when people don't have PMs working on such projects. One is the roadmap. PMs are very good at building products that go to market and looking at the longer horizon. Again, not just the engineering lens of "we're building a cool thing," but the product and customer lens: what is the roadmap for this product this quarter, next quarter, in 18 months? With our PMs and engineers in partnership, we now have a roadmap for every product we own. That also makes it easy when we talk to partners who take a dependency on us, or to our customers: "This is what you can expect. This is what good looks like now, and what it will look like in the future."
And the next one that, again, might not be obvious is storytelling. For any migration, especially an internal one that changes people's diehard habits, we really need to convince them, show them what it looks like. Storytelling is actually pretty important. In a lot of our migration projects, the organizational challenge of changing people's behavior is sometimes the hardest part, and storytelling is a very important part of that. PMs play a very important role alongside engineers in helping create that storyline.
What is the journey we've been on, and where are we taking you? With data, with charts. In the end, I think our engineers are convinced by both testimonials from their peer engineers and by data, and then they decide to give it a try. And with the PMs' help, we have a very tight feedback loop to make sure they feel we listen, that they're heard, and that we're making progress. All of that builds momentum.
Abi Noda: We've talked about a lot of the conditions that contributed to your success and to this becoming a P0 at the company. But I understand that as you embarked on this journey, there were several key areas you focused on or sought to change. One in particular was the shift, as you've described it, from being a feature factory to being more customer-outcome oriented. Share more about what you mean by this.
Gilad Turbahn: There are several things. First of all, it's a matter of how you measure whether the team has done its work. So let's go with the feature factory first. In a feature factory, calling something shipped is success. I say, "Okay, I have delivered our new..." let me pick on, sure, the build system, "I have delivered the private preview version of the build system. Check. I'm done, I'm good." Does it matter if somebody used it or not? No. Does it matter if anybody's happy with what we've done? No. Because the check is for shipping on time. That's feature factory thinking.
One of the shifts is indeed to move into that customer value mode. We don't just care about shipping on time, which is important; rather: are people using it? Are we measuring adoption? Are people happy with what they're seeing? What kind of feedback are we getting? How does it inform the next set of steps? And in customer outcome mode, there are always metrics that measure things from the user's perspective, as we said, but the metrics are a way to measure whether we succeeded; they're not the goal. That's also very important, because sometimes people say, "Okay, we'll just chase getting the metric up or down." No, no, no, it's just the way to measure whether we've generated the right kind of impact. So it starts from the definition.
By the way, one of the main things we did for TypeFaster back in November, December, and January was to ask: how do we measure, what areas should we measure, and which specific metrics are the ones to focus on? For Bazel, we knew it would have to be about adoption of the new technology. Only, I don't know, percentages in the teens of the company used it, and everyone else used the old build systems, so we had to drive adoption. But we also wanted to make sure people can do their job, and that means: how long does it take from the moment you tell the code to build until you can run a test? So we wanted to measure that end to end.
For our testing framework, we wanted people to migrate to the new system, but we also wanted to make sure it helps reduce test flakiness. There's always a combination of adoption metrics for a new technology with customer experience around reliability, performance, functional correctness, sometimes efficiency, and sometimes the more qualitative user experience: "Yes, I enjoy using this, I know how to use it." So we've defined all of those as very clear metrics, and we have a scorecard to measure them quarter over quarter. But our goal is also to look at the testimonials and the sentiment of developers and see if that changes over time, because that tells us we didn't just move a specific metric, we changed how users perceive us, and that's the customer outcome.
Abi Noda: Amy, did you want to add anything?
Amy Yuan: Yeah, so I'd say when we're not focused on customer outcomes, it very easily becomes a sprint of work items. When we're focused on customer outcomes, we're always thinking about the top customer pain points, and that list will always change, because it's like peeling an onion. That makes sure that every quarter, or even every two weeks, we're addressing what customers are telling us, and that builds trust, because they can see things getting better. In the past we might sometimes go dark because we were very focused on building some engineering innovation, and in the end that doesn't move the needle on both fronts. We need to build a solution that's ready, but we also need to make sure we're improving the customer experience incrementally, week by week or quarter by quarter.
Gilad Turbahn: There's actually one more thing I wanted to add. I think the concept came from Marty Cagan, if I remember the source correctly, and we can probably check: the difference between owning the solution and owning the problem space. That's a major shift that happened behind the scenes in order to go from a feature factory to a customer outcome organization. When you own the solution, you're responsible for delivering whatever you've been told, or whatever you've designed and decided to build. When you own the problem, you lose sleep when a customer feels bad about a solution or an experience. It's no longer "I am responsible for the cloud workspaces," but rather "I'm responsible for developers having an environment where they can do work." It doesn't matter which environment, it could be their own laptop, for that matter, without a cloud workspace; if they have a bad experience working with an environment, we need to solve it for them.
That ownership mindset makes a huge difference for people. That's how we got people to stop thinking, "I am just responsible for Bazel. I'm just responsible for Snowfort. I'm just..." No, no, no: you are responsible for all the challenges in the testing space. You are responsible for all the challenges in inner loop development. That's how the PMs are aligned, and that's how our TLs and EMs are aligned. Once they have that mindset, suddenly we start investing in other things. Suddenly we say, "Actually, de-prioritize this longer-term investment in order to solve a short-term problem," or vice versa, "We hear about this short-term issue; it's going to get solved by this longer-term initiative. We're okay with that." We're making decisions not in the context of the specific product I'm working on, but in the context of the problem space.
Amy Yuan: Yeah. It also means we always ask whether we're measuring the customer's lived experience. If you're just building an engineering solution, you measure the things that you care about, maybe success rate or latency. But then when you start talking to customers, you see that what you measure doesn't match their experience: we'll think things are better, and they'll think they're the same. Then you begin to see, "Okay, I'm not measuring the right thing. Maybe I'm not measuring the end-to-end journey. Maybe there are parts of the journey that I think I'm not responsible for but that are still a poor customer experience. Who is responsible for that?" Ownership means you own the end-to-end customer experience; you need to imagine it, you need to find the problems in it, and you need to solve the whole problem.
Abi Noda: That's a really good segue into the next topic, which we've already touched on several times in this conversation: measurement. Gilad, earlier you talked about one example where measuring just the system-recorded build time wasn't telling the full story about what developers were actually experiencing. And as I understand it, there's been a pretty heavy shift in your measurement approach toward more user-provided or sentiment data. Can you say more about that shift and contrast your current approach with the previous one?
Gilad Turbahn: For sure. Several things again. First of all, let's talk about what you measure when it comes to internal tools, what categories of things, not which specific metrics. If you look at categories, there are five major levers you can pull to improve developer productivity. There's everything that has to do with speed, latency, or performance. There's everything that has to do with the reliability of the system: does it work when you ask it to do work? There's anything that has to do with efficiency: how costly is it to do any type of activity? There's functional correctness: when you ask it to do work, does it do what it's supposed to, or something else? Think of go-to-definition: if every time you click, it navigates you to the wrong place, technically it's reliable, it takes you somewhere, but it's completely not functionally correct, because it took you to the wrong place. And then there's usability.
So those are the five categories, and with those five categories you ask, "Okay, what level of metrics do we want to define for each product, based on where it is in its product cycle, which problems users are facing and which matter most to them, and what's the impact of pulling one lever on another?" For instance, if you prioritize speed over reliability, you can go very, very fast at 20% reliability; just ignore the rest of the cases. So you need to think about those levers and how they affect each other.
What we've done practically is say, "Okay, for each of the areas we're responsible for, let's make sure we have the very basic product operations metrics," because you've got to have those. But then the second layer is to define the user-perceived metrics, those end-to-end ones we mentioned: from the moment I say build until the build is done and I can run a test; from the moment I want to start coding until the environment is ready and I can write the first line of code in the IDE, not just the time it took to spin up the cloud workspace. Those are the user-perceived metrics.
Then the third layer is the end-to-end measurements, or what some people call output metrics. There are subcategories in there, but a good example is: from the moment I say I want to submit my code until it gets merged to main, or until it goes to production, let's measure that. Multiple processes go into that one: we run our merge gates, we have people do code reviews. When you measure end to end and you also have the more granular metrics, you're suddenly able to see: hey, if the granular ones changed but the larger one didn't, there's probably something else hiding in there. That's how we found that the time it takes for the first code review to happen is quite long, and we enabled notifications that told people, "There's a review waiting on you," which helped shorten that time. We had essentially checked and fixed everything else, and then we saw that additional layer.
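As an illustration of the layering Gilad describes, here is a minimal sketch of decomposing an end-to-end submit-to-merge measurement into per-stage durations, so that a bottleneck like time-to-first-review stands out. The stage names and numbers are hypothetical:

```python
from statistics import median

# Hypothetical per-PR timestamps, in hours since the PR was opened.
pull_requests = [
    {"opened": 0.0, "first_review": 26.0, "approved": 28.0, "merge_gate_done": 29.5, "merged": 30.0},
    {"opened": 0.0, "first_review": 20.0, "approved": 24.0, "merge_gate_done": 25.0, "merged": 25.5},
]

stages = [("opened", "first_review"), ("first_review", "approved"),
          ("approved", "merge_gate_done"), ("merge_gate_done", "merged")]

for start, end in stages:
    durations = [pr[end] - pr[start] for pr in pull_requests]
    print(f"{start} -> {end}: median {median(durations):.1f}h")

# If the end-to-end number barely moves while the granular stages improve,
# the dominant stage (here, waiting for the first review) is where to act.
```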
And then the fourth and final layer, I'm pointing down, but it's actually up, the most important top layer, is user satisfaction and where the pain points are. What we've done there is two things. First of all, we went with a standardized CSAT-style survey. We send it out quarterly. We get really good data, and it lets us show improvements over time. Specifically, we're very lucky to be able to say that within the last six months we've shown an improvement of more than 15 percentage points from where we started to where we are today, which is awesome and, again, a testament to the great work of the team and our volunteers and everybody who's been involved. That's theirs to claim.
But we go a level deeper. We look at where people are struggling. We use a question called struggle areas, where we ask people, "Here's a list of 14 or 15 different areas where you could be struggling with your experience." Emphasis on areas, not products. We don't ask about struggling with Bazel; we ask about struggling with inner loop development. And we ask people to rate those areas. When you look at the same question over time, you see the percentage of people complaining about each area start to vary, which means you're decreasing this [inaudible 00:33:03] area of complaint and increasing another, because, hey, people are always going to complain. So we're not worried there; there's still work to be done. And then you cross-reference that with the comments, and you look and you see... So I'll pick on our cloud workspaces... or sorry... yeah, I'll pick on our cloud workspaces.
One of the biggest pain points was the ability to sync with the IDE. No, I'm picking on Bazel, see? Our ability to sync Bazel with the IDE. It's something we hadn't invested in enough in Q1. It was the top comment in that area, and we saw that the number of people saying, "This is a top challenge for me," in the code building area, went down, but the top comment didn't change. It kept being that same thing. Once we saw that happen again and again, it told us the improvements we were making weren't enough to move the needle. So we decided to change what the team was working on. We decided to set a specific metric for that specific area and to look not at how much time, but at how many people are impacted by this.
When you change the lens and you see, "Oh, it's people in these specific offices," and you do a deep dive with data science, this is where the rigor comes in, you see how many people it applies to, you see which IDEs it's relevant for. You go much deeper, and then suddenly you can see, ah, things are starting to move. Suddenly the metric changes. We're waiting for the new survey results to see if it's no longer the top-priority area. But that's exactly the thing. You look at metrics at all of these levels, you think of the five categories I mentioned, and then you go deep once you have data over time, to assess whether you've moved the needle overall.
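A minimal sketch of this kind of quarter-over-quarter analysis, tracking both the share of respondents selecting each struggle area and the recurring themes in the free-text comments; all area names, keywords, and data below are illustrative, not Snowflake's:

```python
from collections import Counter

# Hypothetical survey results per quarter: struggle-area selections plus comments.
surveys = {
    "Q1": {"areas": ["code building"] * 40 + ["inner loop"] * 25,
           "comments": ["bazel ide sync is broken", "bazel ide sync slow", "flaky tests"]},
    "Q2": {"areas": ["code building"] * 28 + ["inner loop"] * 30,
           "comments": ["bazel ide sync still slow", "ide sync fails", "ci queue long"]},
}

KEYWORDS = {"sync", "flaky", "ci"}  # crude stand-in for real theme extraction

for quarter, data in surveys.items():
    total = len(data["areas"])
    area_pct = {a: round(100 * c / total) for a, c in Counter(data["areas"]).items()}
    themes = Counter(w for c in data["comments"] for w in c.split() if w in KEYWORDS)
    print(quarter, area_pct, "top theme:", themes.most_common(1))

# If an area's complaint rate drops but its top comment theme ("sync") persists,
# the fixes so far aren't addressing the dominant pain point.
```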
Abi Noda: I want to call out for listeners two really interesting strategies you mentioned. You focus on user satisfaction CSAT scores, but you go beyond that in two ways. One is the stack-ranked list of pain points, the highest pain points reported by developers. But even more interesting to me, and I don't think I've heard this one before, is that you analyze the prevalence of specific keywords, terms, or topics in the open free-text comments, and you track those themes over time, both to tell a story around the data and to measure success. I found both of those approaches really insightful and just wanted to call them out for listeners.
Gilad Turbahn: One last thing on this one... sorry, Amy. There's even an extra ingredient, because when we run our surveys, we don't run them anonymously, we run them confidentially, as in: we're not sharing that data with anyone, but we do want to be able to reach out to users who complained about something so we can do deeper-dive user interviews and really understand what their complaint translates into in real life. That's also our ability to close the loop with them, because if they complained about something and we know we've made a change, we can reach out and tell them, "Hey, remember when you told us about that?" And that's the trust-building and momentum-building thing as well.
Amy Yuan: Yeah, one more thing I'll add: the survey only runs every quarter, and understanding how customers feel once a quarter is not enough. That's why our PMs also run a CAB, a customer advisory board. These are early adopters or passionate dissenters, very opinionated about how things should be. We have a good sampling of them: different product groups, different IDEs, different programming languages, different adoption journeys, and also different geolocations, because that matters.
Different geolocations have different latency connecting to wherever our data center or build farm is. We do periodic check-ins with them. Sometimes it's a 45-minute in-depth interview. They even show us, "This is what I do. This is why this doesn't work. This is my workflow." Because not everyone's workflow is the same. Some people have their own tools or scripts, and they're annoyed when something stops working or becomes slower. And sometimes it's a drive-by Slack check-in: "Are things better? This has been rolled out. Have you tried it?"
Gilad and I recently visited our engineering offices in Toronto, Warsaw, and Berlin. Being face-to-face in person at those locations, we get to see how they work and whether there are any differences from the US locations. Sometimes there are. For example, the internet service providers are configured very differently. Some people want to work on trains; they want us to support a disconnected scenario. So we go to where customers are and really understand their experience.
And when we measure the operational metrics, we measure P50 and P90, because we want to understand both the average experience and the outliers. The outliers are important too, because once we see them, like Gilad was saying, we do a deep dive. We try to understand the different cohorts and what makes them so different. We also do a weekly metric analysis, an operational review: all the dashboard metrics we've talked about, we review every single week. If we see spikes or dips, we check whether it's a trend, whether it's something we need to look at. When we look at adoption trends, we also look at churn trends: are there people who tried it in the last two weeks but are now leaving? Is it because the experience is bad? If we introduced a regression, we want to nip it in the bud. That's how we're able to drive adoption and progress. Sometimes we have minor setbacks as well.
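For illustration, here is a minimal sketch of the kind of weekly operational review Amy describes: P50/P90 on a latency metric plus a simple two-week churn check. The data and user names are synthetic, not real measurements:

```python
def percentile(sorted_values, p):
    """Nearest-rank percentile on a pre-sorted list."""
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * (len(sorted_values) - 1))))
    return sorted_values[k]

# Synthetic latency samples (seconds) for one week.
latencies = sorted([12, 14, 15, 15, 16, 18, 22, 30, 95, 120])
print("P50:", percentile(latencies, 50), "P90:", percentile(latencies, 90))
# The P90 outliers (95s, 120s) are the cohort to deep-dive on.

# Churn: users active on the new system two weeks ago who have since disappeared.
active_prev = {"ana", "bo", "chen", "dee"}
active_now = {"ana", "chen", "eli"}
churned = active_prev - active_now
print(f"churn: {len(churned)}/{len(active_prev)} = {100 * len(churned) / len(active_prev):.0f}%")
```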
Abi Noda: I wanted to ask a question. A lot of organizations I've spoken with struggle to get buy-in on sentiment data and sentiment metrics. Gilad, I know you mentioned to me that this was one of your biggest challenges at Meta as well. Can you share your advice for overcoming this challenge?
Gilad Turbahn: It's a combination of top-down, bottoms-up, and infusing culture. Let me explain, starting with the challenge. At the end of the day, Meta is a company that measures people on their impact, and it's very hard to convince anybody to say, "Okay, I'm tying my bonus to something that depends on what somebody says three months after I did the work." That's just the problem. It's not a Meta-specific problem; I just ran into it there. So what we've done here is, first of all, make sure we're all aligned, that we're singing the same tune, that we all care about it, and that our own bonuses, our own goals, our OKRs at the organizational level, are tied to this. Amy is accountable for this, I'm accountable for this. Nathan, our chief architect, is accountable for driving up sentiment. So we're not just asking the team to commit to it; we're doing it ourselves. That's the top-down part.
There's another aspect of the top-down: reminding people. My role in the company is the CRO, the Chief Repetition Officer. I'll keep reminding people again and again, "This is where we're at. This is what we're seeing for your area. See how things change." We look for Xs in the graphs, where dissatisfaction crosses satisfaction. That's how we energize people. So that's part of the top-down.
The bottoms-up comes from choosing TLs and EMs who truly care and understand things from a customer perspective. We are extremely blessed and lucky to have those types of folks on the team, because their demeanor is not, "I have a technical problem I need to solve." Their demeanor is, "I have a customer-impacting thing I want to do." And that's a huge change. So it's part of our interviewing; it's part of making sure we hire the right folks. When you have that layer and that type of thinking, you're able to drive change within the organization. And then they feel accountable, because they don't care whether they delivered system A or B; they care whether they changed the sentiment, because that's the bottom line, and whether they improved their metrics, because we know a user-perceived metric has a direct effect on the sentiment, on the struggle areas, and on everything we do.
And then it's about the communication internally within our organization. That's what we do. We talk. Every time we have an all-hands at Amy's level, we share with the team: "Here's what we showed all the way up to the leadership level. Here's the real data. Here's what matters to the company's leadership. Here's what we're measuring right now. Here's what we're hearing from customers." Everything is transparent and open to the team, because we want them to see the things we care about day in, day out, and the things leadership cares about.
We just had an all-hands. Again, the team has delivered incredible results in the last couple of quarters. If you look at the adoption of the three main new systems, our cloud workspaces system, Bazel, and Snowfort, our internal regression testing framework, we've gone from the teens to the 60s, 70s, 80s in six to nine months' time. That is incredible. That's something the team is doing.
So Amy pulled in our founder to come speak. Hey, it's the founder who said in the leadership meeting that he calls this a night-and-day difference. Great, we sit in the room, we hear about it and feel proud of the team, but we wanted the team to hear it too. So Amy pulled in Benoit, and he came in and talked to the team about why it matters to him, why he sees this as a massive difference. So, all three aspects: the top-down, the bottoms-up, and picking the right people in those EM leadership positions. And of course PMs, data science, docs, TPMs, all those layers are in there, and it's critical that we pick the right folks for them. And then the communication across the org, very frequently.
Amy Yuan: Yeah, I think a lot of times we hire people who are very passionate about the DevOps space. Some of the recent people we interviewed said that, back when they were working on customer-facing features, they were super annoyed by how difficult building or testing Snowflake was. So they basically spent their personal time working on it and became so interested that they said, "No, no, let me go work on that project; that should be my project, because I want to make it better for me and for everyone around me." That's the sort of people we want to hire. That passion and customer empathy is what makes any product successful.
Abi Noda: Have you found it difficult, or run into scenarios where goal-setting against sentiment is problematic because it can be influenced by factors outside your control? For example, biases introduced in the way people rate certain things, or participation rates, maybe not getting the right sample, or the same sample as a prior baseline?
Amy Yuan: Yeah, we're not setting goals based on sentiment. Our goals are ones the engineers feel are within their control: the metrics we define together. Sentiment being moved by the product is the outcome we try to drive. But we don't say, "For this quarter, our goal is to move sentiment X percentage points."
Gilad Turbahn: It's the long-term goal, not the short-term quarterly one. To answer the specifics of what you asked: don't launch a survey the day before the company goes through a layoff. That usually hurts survey responses. Amy gets the credit for this one: we do a bunch of stuff to get people to respond to the surveys. We have challenges for teams. We actually promote it internally, and our founders and our EVP of engineering wear costumes if we hit a certain participation rate. We want to celebrate that we're measuring our engineering experience, because it's such a critical thing, and we make it into something people are excited about. Today I get to stand up in the product all-hands and spend six minutes talking about the five-minute survey that everybody took. And we've been fortunate: people are responding, they're telling us what's wrong, and we get to fix it. The results show us that we're making a difference, and again, the team is doing the work, to be clear; I just get to present it.
Amy Yuan: Yeah, I think again the important part is closing the loop. Every time after the survey, we show people how their feedback shaped the next quarterly planning. And we tell them every single time, "Please give us your open and honest feedback. We're not trying to hear only the good feedback. We honestly want to hear your pain points, because that helps us work on the right things for you next quarter." And we show them, "This is your feedback, and this is where it landed in next quarter's planning." I think that helps motivate people to give us very honest, very balanced feedback.
Gilad Turbahn: I'll give you the best quote that we got from a developer back in Germany when we were visiting, "I'm still complaining, but I'm complaining about different things than what I complained about before," which is very good.
Abi Noda: Yes, that's a strong endorsement of the work you're doing. Earlier we talked a little about how you get buy-in for sentiment data, and you shared the multiple factors that go into that, as well as the anecdote about your founder getting up at an all-hands and talking about the importance of your work overall. I want to ask: how do you think about understanding or translating sentiment data into dollars, business impact, business outcomes? Again, I've seen some of your slides and presentations, and there's a huge emphasis on the increases you've driven in satisfaction and sentiment. How do you tell a story around that? How do you tie that to real impact in the business?
Gilad Turbahn: It's a combo. First of all, we'll admit there are cases where it's harder to tie together, and cases where it's easier. For instance, we have a process called pre-commit: essentially something every developer runs before they submit their code to be merged, so the testing suite runs on it. When we reduce the time spent in that process by focusing on the right type of tests instead of just running the whole suite and taking forever, we're saving the company money. You don't need to run every single thing. And of course it improves sentiment, because if I'm waiting on something for an hour instead of five hours, I'm happier, and it also changes the way I work. You actually see that not just in the sentiment, but also in the way people go about doing work.
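As a sketch of the general idea behind speeding up such a pre-commit step (not Snowflake's actual implementation), one can run only the tests plausibly affected by the changed files. The module-to-test mapping here is hypothetical; real build systems such as Bazel derive this from the dependency graph:

```python
# Hypothetical mapping from source modules to the test targets that cover them.
TESTS_BY_MODULE = {
    "compiler/parser.py": ["tests/test_parser.py", "tests/test_end_to_end.py"],
    "storage/cache.py":   ["tests/test_cache.py"],
    "web/ui.py":          ["tests/test_ui.py"],
}

def select_tests(changed_files):
    """Return only the tests affected by a change; fall back to the full suite."""
    selected = set()
    for f in changed_files:
        selected.update(TESTS_BY_MODULE.get(f, []))
    return sorted(selected) or ["<run full suite: unknown files changed>"]

print(select_tests(["compiler/parser.py"]))
# ['tests/test_end_to_end.py', 'tests/test_parser.py'] -- instead of every test
```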
Another area where sentiment impacts dollars at the end of the day is maintenance costs. Think about the number of people who need to keep the lights on for older systems. If we're able to drive adoption and people are happy, they don't switch back. And that's one of the things we're blessed to be seeing right now, percentage-wise, and we measure this pretty much daily: most of the people who migrated to the new systems are 100% users of those systems. They don't switch back. Only a single-digit percentage of people use both or switch back, and of course we need to target them and understand what they're doing. But when they switch, they're happy about it, and they tell us they're happy about it, and that saves us money.
You no longer need to support old systems. You no longer have to keep that knowledge internally. And you can create new recommended paths for people: "Here's what you should be using for your development journey." We've been able to do that because we have people in the company who are now satisfied and want to champion how to do the work.
Amy Yuan: Yeah, some of the intangibles are happier engineers, which helps with retention, and more productive engineers being able to ship features faster, maybe with higher quality as well. It's hard to put a very concrete metric around that, and I think we're very fortunate that our leadership understands that. They're not asking us to come up with a metric to quantify those aspects of the outcome.
Abi Noda: Shifting topics a little: we've talked about how this journey began, a lot of the ingredients key to your success, and a deep dive into how you think about measurement. But I know from the stories you've told me that there have been some additional key unlocks that helped you break through and achieve success, so I want to talk about a couple of those. One thing that's interesting to me is that they actually came into the picture later in your journey; they weren't necessarily present at the start, but as you moved through the journey, these additional ingredients for success manifested themselves. The first I wanted to ask about: I understand your CTO became directly involved in this initiative. Can you share more about what you mean by this and how it came about?
Gilad Turbahn: Want to talk about the beginnings?
Amy Yuan: Yeah, sure. So when Murali became CTO, he very clearly took this as his first project. What that meant to him is: it's not "I'm the executive sponsor, you bring me updates." He sits in the actual review meetings every week with every one of the work streams, in the trenches, helping the team design or resolve problems of any type. So the teams also get the benefit of somebody that experienced and that hands-on in the meetings. I get to sit in a lot of those, and I see it day in, day out when he comes in and gives guidance: "Here's how we should design or architect the system. Here's the trade-off we need to make." He has all those considerations in mind, and he knows how it impacts the other work streams, so his involvement from day one made us a lot better.
To be clear, it's not to replace the TLs in the space. We have very strong TLs. And it's not to replace the volunteers; the volunteers in this space have made all the difference in the world. They allowed us to go a lot faster and collaborate between teams that otherwise wouldn't have collaborated. He's more the orchestrator and the judge in the room, the one who helps resolve issues and has all the technical understanding. That's how it helped accelerate things.
Gilad Turbahn: Yeah. I'll add two things. Having Murali kickstart a lot of the work streams also gave us cover on what we are going to solve and what we are not going to solve, because we couldn't boil the ocean. The team at that time wasn't staffed to boil the ocean, so he was very clear: we are going to solve these top pain points. We're not going to make every developer at Snowflake happy yet. We're going to focus on this cohort, and this is going to be our goal. He also looks beyond what the productivity team can do; he looks at the entire product as well. What are the places in the core engine, for example, where volunteers can improve unit test time, integration test time, setup time, product startup time? He looked across the board and staffed work streams with volunteers from across the company, and that's what made possible a lot of the progress of the last six to nine months.
Abi Noda: I want to ask more about that, because I understand that was another factor that came into the picture later in the journey: not just identifying champions, but actually getting volunteers from across the company to aid in the work. Share how you were able to accomplish this.
Gilad Turbahn: Lots of peer pressure. No. To get something like this done, you need other teams involved, because they see what the day-to-day looks like in their own operations. They're the people who are passionate about solving those build problems, those cloud workspace problems. They're the ones who can unlock the picture for us. That's where the volunteers came in and why they were so important. The collaboration between those parties, Murali the CTO and the architects, the engineering systems side (that's our team, the one dedicated to working on this space), and the folks from cloud engineering who helped us pick it up and build the high-quality cloud workspace that now spins up in a matter of minutes instead of much longer, that collaboration is what makes the difference. Some of it we had to do in a more top-down way and tell people, "Please lend us folks," and it came from company leadership, and that helps.
In some cases we opened it up and volunteers came in. In a third category, it was more about visibility: we give them visibility with somebody as appreciated as Murali, our CTO, and with our founders; they get more exposure, and their results count toward their ratings. All of those factors had to come together for this to work and be effective.
Abi Noda: Amy, were you going to say something?
Amy Yuan: No, I was just going to say the volunteers, I think, are motivated by a few things. Making their own lives better. And then I think they were also motivated by, "Hey, why is this thing so hard? Let me go figure it out, and maybe I'll learn something new, something different." And I think some of them are also motivated by the chance to work very closely with our CTO, Murali.
Abi Noda: Well, Gilad and Amy, thanks for this in-depth telling of this incredible journey that you've been on for the past eight months. I'm really excited to continue following your journey and learning from all your innovative ways of approaching this problem at Snowflake. Thanks so much for coming on the show today and spending your time here.
Gilad Turbahn: For sure.
Amy Yuan: Thank you for having us.
Gilad Turbahn: Thank you.
Amy Yuan: Thank you.
Abi Noda: Thank you so much for listening to this week's episode. As always, you can find detailed show notes and other content at our website, getdx.com. If you enjoyed this episode, please subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Please also consider rating our show since this helps more listeners discover our podcast. Thanks again, and I'll see you in the next episode.