Engineering Enablement by Abi Noda

This week’s episode is a recording from a recent event hosted by Abi Noda (CEO of DX) and Laura Tacho (CTO at DX). The episode begins with an overview of the DORA, SPACE, and DevEx frameworks, including where they overlap and common misconceptions about each. Laura and Abi discuss the advantages and drawbacks of each framework, then discuss how to choose which framework to use.

Discussion points:
  • 2:50 - DORA, SPACE, DevEx overview
  • 10:35 - Choosing which framework to use
  • 13:15 - Using DORA
  • 22:42 - Using SPACE

Creators & Guests

Host
Abi Noda
Abi is the founder and CEO of DX (getdx.com), which helps engineering leaders measure and improve developer experience. Abi formerly founded Pull Panda, which was acquired by GitHub.

What is Engineering Enablement by Abi Noda?

This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches on measuring developer productivity. The EE podcast is hosted by Abi Noda, the founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods to help organizations improve developer experience and productivity.

Laura Tacho: Hello to everyone. My name is Laura Tacho. I'm the CTO at DX. We are a developer insights company. I'm joined here by Abi. Abi, I'll let you introduce yourself.

Abi Noda: Thanks, Laura. Yeah, I'm co-founder and CEO of DX and have spent way too many years of my life in this space trying to figure out how to measure productivity, as you have as well. So excited for the discussion.

Laura Tacho: Yeah. So Abi and I are really glad that you've taken some time out of your day to join us for a discussion on developer productivity metrics and frameworks. We're going to be discussing DORA, SPACE, and the DevEx framework. And what we're going to do is go through a bit of an overview of what these frameworks are. We also want to touch on who they are best suited for, when should you pick them, do you need to pick one? Some of these really pressing questions that I know a lot of you have, and that's why you're taking time to be with us today. So we're here to talk about DORA, SPACE, and the DevEx framework, and Abi and I met ahead of this workshop and we thought, let's just start with a brief overview of what each of these are before getting into the more interesting details about them and ways that you can apply them.

So I want to kick off by talking about DORA metrics. I know from the responses that many of you sent to me about which metrics you're already using, a lot of you already know about DORA metrics, but if you don't: DORA stands for DevOps Research and Assessment. The DORA metrics are a set of four metrics that have more or less become a standard way to measure software organization performance and software delivery capabilities. They're not necessarily developer productivity metrics, but a lot of companies use them that way, or use them as a proxy for developer productivity. So those four DORA metrics are lead time for changes, change failure rate, deployment frequency, and then mean time to recover. The MTTR one has been renamed to failed deployment recovery time. So I might say MTTR just because that's the old name. The newer name, coming out of the State of DevOps Report 2023, is failed deployment recovery time.
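To make the four definitions concrete, here's a minimal sketch of how they might be computed from deployment records. The record shape and field names are invented for illustration; real pipelines expose these events differently.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records over a one-week window.
# Field names are invented for illustration, not from any real tool.
deploys = [
    {"at": datetime(2024, 1, 8), "commit_at": datetime(2024, 1, 5), "failed": False},
    {"at": datetime(2024, 1, 9), "commit_at": datetime(2024, 1, 8), "failed": True,
     "recovered_at": datetime(2024, 1, 9, 4)},
    {"at": datetime(2024, 1, 12), "commit_at": datetime(2024, 1, 11), "failed": False},
]
days_observed = 7

# Deployment frequency: deploys per day over the observation window.
deployment_frequency = len(deploys) / days_observed

# Lead time for changes: commit-to-deploy time, averaged across deploys.
lead_times = [d["at"] - d["commit_at"] for d in deploys]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deploys that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Failed deployment recovery time (formerly MTTR): failure-to-recovery time.
recovery_times = [d["recovered_at"] - d["at"] for d in deploys if d["failed"]]
avg_recovery = sum(recovery_times, timedelta()) / len(recovery_times)
```

Note that even this toy version has to pick a start point for lead time (first commit here), a choice teams often debate.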

The main benefit of DORA metrics is that they are standardized. It's a set of four metrics, and every company uses the same four, so with DORA metrics you get benchmarking across the industry. They're very valuable to a lot of teams and companies in assessing their organizational performance and their DevOps capabilities. On the other end of the spectrum from DORA, which is highly standardized, the SPACE framework of developer productivity is a little bit newer. It was published in 2021, and it's a very open-ended, broad framework that covers a holistic, comprehensive view of how we should be thinking about developer productivity.

So there are five categories, one for each of the letters: satisfaction and wellbeing, performance, activity, communication and collaboration, and then efficiency and flow. And the idea with SPACE is that it's meant to give you some mental models and some vocabulary to think about what your definition of developer productivity is, and then therefore how you should measure it and which metrics you can pick that line up with these SPACE dimensions. We're actually very lucky to have Abi here because he is an author of the DevEx framework and did a lot of research on that. So Abi, I'll let you introduce the DevEx framework and tell us just a little bit about it.

Abi Noda: Yeah, thanks for the overview on DORA and SPACE. That was great. One of the things that's interesting about the DevEx framework, the DevEx paper we published early last year, is that Nicole Forsgren, the creator of DORA and also the lead author of the SPACE framework, and Margaret-Anne Storey, co-author of the SPACE framework, were also co-authors on the DevEx framework. So there's a little bit of continuity in all these different frameworks and ways of thinking about developer productivity, and I think that's something some of the audience is probably aware of. The concept around developer experience, I could talk about that forever, but the goal of the paper was to provide a really practical way of thinking about measuring and improving developer productivity through the lens of developers. So whereas DORA takes sort of a system-focused approach and SPACE provides a very broad way to think about productivity, DevEx is really focused on being more of a how-to field guide, a practical approach to applying a lot of the concepts from SPACE and DORA, as we'll talk about later.

And we created it for a lot of reasons. One being that, as we're all here talking about this topic today, it's a really difficult problem, and I wouldn't say that any of us feel like we've truly solved it. And also as a direct response to what we were hearing from folks about the SPACE framework. We'll talk about this more later, but SPACE really opened people's eyes to a new, broader way of thinking about developer productivity. It also opened their minds to thinking about developer productivity in terms of the perceptions of developers, but there was still a gap or a need, a lot of questions from people on, well, how do we actually do SPACE? How do you actually measure developer perceptions and combine them with quantitative data? And so the DevEx framework was an attempt, also leaning on other research we've done into developer experience specifically, to provide a practical framework for actually doing that in your organization. So that's a two-minute introduction to the DevEx framework.

Laura Tacho: Yeah. And Abi, I like how you called out that this is sort of on the same arc of research, because we'll talk a little bit about what that means in practice when we talk about: should we pick one, do we have to pick one, how do they relate to each other? So I'm glad to hear you call that out. It's really interesting. So I read through all of your responses, and those of you who had the time to respond to me and share what you're using, I really appreciate it. I'll put the results up for you all right now. Of those of you who registered and are attending here, 30% of you aren't using anything right now, not using any measurements, not collecting any metrics, you're just starting out on your journey. So you're definitely in the right place.

For the rest, the 70% of you, a lot of you are using mixes already, highly aligned on DORA, I would say. So 64% of people who registered for this and took the time to respond are using DORA. And then 29% of you are already using the DevEx framework as well. Interestingly enough, there was a very, very small percentage, 4% of you, who said that you were using SPACE, and then 38% of you said that you're using something else. You could argue that everyone is using SPACE, whether or not you know it, because SPACE is this very broad framework. But the fact that only 4% of people said, oh, I'm using SPACE, also kind of calls out that there's a lot of misconception about what SPACE is, what SPACE metrics are, and how they're used. I'm curious about your opinion on that, Abi.

Abi Noda: Yeah, as you were saying earlier, SPACE is very broad. And I remember a recent conversation where someone pointed out that SPACE doesn't just apply to developer productivity. You could really use it to think about productivity for any type of knowledge work, or even non-knowledge work. And as you were pointing out, a lot of the other frameworks, like DORA and DevEx and other frameworks we're not discussing today, you could think of them all as falling within the SPACE framework. In a conversation with Nicole recently, we were talking about this, and she explained it as DORA being an instance of SPACE. And I really like that mental model. Similarly, you could call DevEx an instance of SPACE. As you were saying, SPACE is really this umbrella framework for thinking about productivity generally, and all these other frameworks, DORA, DevEx, fit within it and are implementations of it.

Laura Tacho: Yeah. I think that's such a great way to think about it. I also think of SPACE as sort of like the periodic table: you can put together a lot of different things from those elements, and some of them are really good, but some of them can also be very bad. Just because a metric fits into SPACE doesn't mean that it's, quote, unquote, a good metric. I think a lot of us have been burned in our careers, or just have strong gut reactions when we hear about lines of code or some of those other bad measurements. The truth is, lines of code still fits into SPACE.

And so we'll untangle some of that misconception, and how we use these frameworks, as we get into this. I wanted to start with a question that both you and I get pretty frequently: well, which one should I pick? So Abi, what do you say to people, those of you who are watching now who have that same question: should I pick DORA? Should I pick SPACE or DevEx? What should I do?

Abi Noda: Funny enough, and I'm sure you do as well, I literally get asked this question. Just a few weeks ago I was on a call with a leader at a large tech company who told me, our team is working on picking a framework. So this is a really common question, and my advice, and this isn't the simplest advice, is to not necessarily pick a framework. If you're picking a framework, you're really approaching measurement in not the optimal way. And we talk about this all the time. When you're thinking about how to measure developer effectiveness or engineering performance, you really want to start by defining your goals for measurement and how you're going to use the measurement. Then work back into, okay, what are the right signals and metrics we can use to actually operationalize what we're trying to do?
And so my initial response is always: hold on a second. Before you choose a framework, talk to me about what you're trying to do. Are you looking for metrics for your developer experience program? Or is your CTO looking for metrics to understand system health? Really honing in on who the stakeholders are, who the consumer of this information is, and what their goals are is really important before you get into a conversation about which framework.

Laura Tacho: Yeah, I couldn't agree more. I teach a course on developer productivity metrics, and I've worked with hundreds of leaders and helped coach them on how to use these metrics and operationalize them. And it's very funny, because the first thing on day one, minute one, is: what is developer productivity at your company? And no one has an answer, because it's usually not the thing that you think about. You start with metrics and you want the metrics to do the heavy lifting of defining what productivity is, but it can be pretty problematic to outsource that core belief about what's valuable to your business to someone else or to a framework. So start with your definition of productivity. What are you trying to do? And then see what measurements, or what framework, support the goals that you have for your team.

Abi Noda: And we'll talk soon about the good, common use cases for these different frameworks. But as we pointed out earlier, because all these things overlap, it's kind of a funny question to ask which one to use, because in a sense, no matter which one you use, at least with SPACE and DevEx, you'll be using some of the others as well.

Laura Tacho: Yeah. Why don't we get into some of those use cases, positives and negatives, that we've seen out in practice, because you and I both see a ton of different ways that all of these frameworks have been implemented: how teams are using them, the benefits they're seeing, some of the stumbling blocks. Let's just start with DORA, because, well, first of all, it's the oldest, and we have the most data and anecdotes. What are some of the really positive use cases or really great ways that organizations have used DORA, or do use DORA?

Abi Noda: Yeah, I think two come to mind, because we've heard a lot of stories of where DORA metrics have either gone wrong or where companies feel like maybe they weren't the right thing to measure after they spent a good amount of time measuring them. But I think there are two patterns where I see DORA being really successful. One is that some organizations just have a really difficult time aligning on what they should measure, but people want to measure something. I just did a podcast episode with Airbnb, and this was their exact scenario. DORA presents at least a starting point, an industry standard, a way to get through that blockade of disagreement and non-alignment around what the right things to measure are. So in that sense, even if DORA metrics aren't the most valuable and insightful thing for your organization to be measuring, they're a good place to start, especially if you're having trouble figuring out what's most appropriate.

And then the other scenario in which I see DORA metrics having been really successful is for organizations that are focused on DevOps transformation, or who are going from batched releases to continuous delivery. And you can add more color to this, but this is really the context in which DORA was born, during the DevOps movement. I mean, the DevOps movement is still happening, but DORA was really a way to measure and benchmark the success of moving to continuous delivery and more agile ways of working. And it's really effective in those situations. Once you're past that point of transformation in your organization, once you're already doing continuous delivery, I think that's where you sometimes see the diminishing value of DORA metrics, and organizations starting to ask: so now what? Now what should we measure? Now what should we improve? That's my experience, at least.

Laura Tacho: Yeah. I want to scratch a little bit at what you said about it being a standard that can streamline the decision-making process about which metrics to pick. I mean, 30% of the folks who are watching this or attending aren't measuring anything yet. And my advice to all of you who are not using metrics: you have to build out your organization's muscle for using metrics, capital-M Metrics, in general. What does it look like to be an organization that uses metrics to measure something? How do we talk about them? When do we talk about them? How do we use them? Who sees them? Who is evaluated by them? There are lots of different questions around capital-M Metrics, as in, it's a thing now in your organization.

And then there are the lowercase-m metrics: what are the things that you're actually measuring? And you can't skip the organizational change that you need to go through in order to start using capital-M Metrics. That is a transformation, going from no data to being data-informed, and it's quite an important one. So DORA is a great option if you're just wanting to focus on, well, let's just measure something and see how our organization does when we do have instrumentation. DORA can be a bit of a shortcut because the guesswork of what to measure is taken out.

Abi Noda: And Laura, I would just add that one of the things I think DORA has done for our industry is give rigor to the idea of measuring software development organizations, and open our eyes to the fact that you can do this in a scientific way versus a rudimentary or ad hoc way. And I think that's something that is so beneficial, and it inspired all of us here today to approach this problem thoughtfully.

Laura Tacho: Yeah, I think DORA absolutely was revolutionary, and it still continues to be. It's a collection of people who are still doing research, so it's not that DORA ended with these four metrics. For those of you who maybe don't have familiarity with DORA, or even if you do, there's probably still some stuff left to learn. So DORA is those four key metrics, and the benefit of using DORA, in addition to them being standardized, is that you get access to benchmarking data across the industry. If you go to dora.dev, there's a quick check. If you have no experience with DORA metrics, or you're waiting for instrumentation and automatic collection in order to know where you fall in the DORA performance clusters, there's a really easy way to do it: just go to this quick check, fill out five multiple-choice questions, and you'll get your cluster assignment. So you'll know whether you're an elite, high, medium, or low performer.

And I think a very interesting thing about DORA is that we see it as quote, unquote, objective, automatically collected workflow metrics. But in fact, all of the research around DORA is survey-based. And maybe, Abi, you can talk a little bit more about that, because I think it's somewhat surprising. We often talk about DORA as being the holy grail of objective workflow metrics, when in reality the very rigorous research is survey-based and self-reported.

Abi Noda: Yeah, it's funny. We work with so many new organizations that want to measure the DORA metrics, and they want to measure lead time and deployment frequency. And it's funny, folks get into really intricate debates, and sometimes arguments, about how these things should actually be measured. Is it time from first commit to deploy, or pull request open to deploy, or pull request merge to deploy? And when these kinds of conversations are happening, I always point out to folks: actually, if you want the canonical definition of lead time, it's measured with a survey.
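To make that ambiguity concrete, here's a hypothetical sketch with invented timestamps showing how the three competing definitions give three different numbers for the same change:

```python
from datetime import datetime

# Invented timestamps for a single change, for illustration only.
first_commit_at = datetime(2024, 3, 1, 9, 0)
pr_opened_at = datetime(2024, 3, 1, 14, 0)
pr_merged_at = datetime(2024, 3, 2, 10, 0)
deployed_at = datetime(2024, 3, 2, 16, 0)

# Three common operationalizations of "lead time" for the same change:
lead_from_commit = deployed_at - first_commit_at  # 1 day, 7 hours
lead_from_open = deployed_at - pr_opened_at       # 1 day, 2 hours
lead_from_merge = deployed_at - pr_merged_at      # 6 hours
```

Same change, three answers, which is part of why the canonical DORA research sidesteps instrumentation details and asks developers directly in a survey.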

And I point to the quick check that you just shared with listeners. I think this could be its own discussion. We've talked about doing another event where we talk about objective versus subjective, qualitative versus quantitative, system versus survey-based data. But in general, from what I've seen, there's a big missed opportunity in the industry. We're spending a lot of time, money, and energy instrumenting metrics like the DORA metrics, and sometimes that's really needed, but a lot of times, especially when you're just getting started or just looking for that initial baseline, a survey is often a much cheaper and faster way to get that data and understand whether these metrics are something you want to continually measure and operationalize within your organization, at which point instrumenting them and getting them in real time has a lot of advantages. So yeah, this is a really interesting topic you've brought up. It probably merits its own discussion, and I know you have perspectives as well.

Laura Tacho: Yeah, sounds like a good part two. Before we move on to SPACE, I want to expand the conversation a little bit on when DORA metrics might have, as you said, diminishing returns or diminishing value. For me, that list starts with teams that are past the stage of digital transformation or adopting DevOps practices. Newer startups that have been using CI/CD tooling from day one are probably going to be elite right out of the gate, just by default. DORA can tell you that you're not degrading, but it's really not going to pave a path or shine a light on weaknesses or missing capabilities in the same way it can for an older, more traditional enterprise team that's going through this digital transformation. And then another case where DORA is not super effective, and it's not just DORA, any metrics are not super effective here, is when the metrics are not linked to outcomes, or the organization itself is just not prepared to do anything with the data.

I think it's more so with DORA because it is so accessible. There's the trope of executive-driven development: we look at these four metrics and say, okay, well, what are my four metrics? We want to measure, but then we don't do anything with the measurement. That's a trap I would encourage all of you not to fall into. Make sure that you close the loop when you introduce measurements. Otherwise you're going to have some potential culture issues where people think you're just spying on them, and there's a lot of apprehension around the metrics. So make sure you close the loop and have an appetite for change, not just an appetite for measurement.

Abi Noda: Plus one on: why are you doing this? The goal is to improve. We get really hung up on just getting the data, getting the numbers, and what they're telling us, but we often lose sight of the goal. And I would never make the argument that you shouldn't measure DORA metrics. That's not the advice I give to folks. The advice I give is: think very carefully about what your charter is. Because some organizations make the mistake of making DORA metrics their charter when it doesn't actually align with the outcomes they're trying to drive in that moment in their business. And I think that's where the danger is, not only with DORA, but with all these frameworks. If you just implement a framework, you're subscribing yourself to focusing on the scope that the framework focuses on. DORA has evolved, but a large part of DORA focuses on DevOps performance. And if your organization isn't focused on those capabilities, then DORA metrics might not be the right outcome measures for the work you're trying to do.

Laura Tacho: Yeah. So let's move into SPACE, and I'll actually frame this with a question from Jessie: wouldn't you say that you need both DORA and SPACE? I don't think it's one or the other. And that's exactly right. SPACE is a very comprehensive framework when it comes to defining developer productivity. DORA is a measure of software delivery capabilities that we often use in the industry as a proxy for developer productivity; it's hard to be productive if your organization doesn't have those capabilities, so that's where the connection comes in. SPACE is a framework for defining and then picking measurements for developer productivity. So we can apply any metric to a SPACE category, or we can use SPACE as a lens.

So for example, if our definition of developer productivity is, I feel productive when I close a lot of PRs, then number of PRs closed falls into that A category; it's an activity metric. Or, I feel productive when I have two hours of focus time: that is going to be an efficiency metric. You can count how many times per week your developers have uninterrupted coding time, or how many no-meeting days you have. There are lots of different ways you might measure that.
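The category-mapping exercise can be sketched as a simple lookup. The metric names and tags below are hypothetical examples, not recommendations:

```python
# The five SPACE dimensions: Satisfaction and wellbeing, Performance,
# Activity, Communication and collaboration, Efficiency and flow.
SPACE_DIMENSIONS = {"S", "P", "A", "C", "E"}

# A team's candidate metrics, each tagged with the dimension it covers.
# These are hypothetical examples, not recommendations.
candidate_metrics = {
    "PRs closed per week": "A",
    "Hours of uninterrupted coding time": "E",
    "Change failure rate": "P",
}

covered = set(candidate_metrics.values())
gaps = sorted(SPACE_DIMENSIONS - covered)  # dimensions with no metric yet
```

Here the uncovered dimensions come out as S and C, the satisfaction and collaboration categories that, as the discussion notes, teams tend to overlook.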

And so what I think SPACE is best used for is conversations. It provides vocabulary and mental models to help you and your team figure out what productivity means. And then also, and this is very important, SPACE calls out gaps in your own thinking. Oftentimes I find that a lot of developers especially are heavy on the A metrics, the activity metrics. We're thinking about closing PRs, closing tickets, deployments, shipping stuff. We're talking about quality: we don't want bugs, we want to avoid incidents. And then, I feel productive when I have flow, or the opportunity for flow.

What often is missing is I'm productive when I don't feel burned out. I am productive when I'm writing documentation to make changing things easier in the future. And those are those S and C categories in SPACE that are often overlooked.

And so I think one interesting way to use SPACE is to get your team in a room, talk about your definition of developer productivity, then map it onto those SPACE dimensions and see where your gaps are. SPACE can be really useful for that, but SPACE in and of itself is not a list of metrics to implement. And I think that's the biggest misconception, and maybe a reason that a lot of you who are in this room and taking time to learn more thought, I'm not using SPACE, when actually you are, because every metric can be evaluated with SPACE. Abi, I wanted to ask you: maybe you could cover a little bit how SPACE also positions workflow metrics and perceptual metrics of productivity on an equal footing, because that's something that's also featured very prominently in the DevEx framework, and a common thread between the two.

Abi Noda: Yeah. I want to also double-click on that last point around the SPACE metrics. That's something we both hear so often from people: hey, we're trying to implement the SPACE metrics. We have a podcast we did with Margaret-Anne Storey, one of the co-authors of the SPACE framework, and she talks about this. If you look at the SPACE example metrics, right, not the SPACE metrics, but the SPACE example metrics, you see some metrics in there that you might be a little surprised by, like velocity points. Actually, I think commits, maybe lines of code... actually, maybe not lines of code.

Laura Tacho: I think lines of code is in there. Yeah.

Abi Noda: It is in there. Yeah. So...

Laura Tacho: I think there's like a note that says, don't do this.

Abi Noda: Yeah, with a caution. Yeah.

Laura Tacho: Yeah.

Abi Noda: And we talked to Margaret-Anne Storey, Peggy, about this, and she said, yeah, when we were writing the paper, we were all laughing about this, saying this is a terrible thing you should never measure. But they wanted to build a bridge, right? Because that is a metric that a lot of people measure. And so it's really important for people to understand that the metrics in SPACE are examples to illustrate the framework. It's not: here's a framework, here are the metrics you should be using. And we see organizations getting themselves into trouble even using the SPACE framework as justification for measuring things like lines of code that we all really know we ought not to be.

Laura Tacho: Yeah.

Abi Noda: But to bridge into the DevEx framework: SPACE was written in the midst of Covid and the shift to hybrid and remote work, and everyone was trying to understand, well, how is this impacting our productivity? And of course what was happening was that everyone was latching on to output metrics, primarily, like lines of code and pull requests and commits. One of the core goals of the SPACE paper was to challenge that conventional way of thinking about productivity. That's why, if you go through the SPACE paper, multiple times throughout, right in the intro and then in several sections, they talk about the fact that you can't reduce productivity to simple measures of speed and output.

And in particular, there are several sections focused on the importance of gathering developers' sentiment and perceptions of productivity, and on that being an equally important way to understand and measure productivity. And of course, one of the challenges is that the concept makes sense, but going from concept to application is really difficult. That's one of the things we work on, and it's really, really difficult.

And so that was one of the goals with the DevEx paper and framework. We'd been doing all this research around developer experience, asking what developer experience is, and it's very overlapping with this idea of understanding and improving productivity. Through our work and research on developer experience, we realized that this was really the same thing the SPACE framework was talking about, just through a slightly different lens: the lens of the developer, which of course relies on gathering feedback and sentiment from developers to measure things.

And so when we talk about DevEx, we often say, look, we wanted to take the next step from what the SPACE framework was talking about. It's a little bit more focused, a practical application of SPACE for development organizations that are thinking about developer experience, which a lot of organizations are thinking about today. And of course, it prominently features self-reported data, both subjective and objective, as a way of capturing data to understand productivity and experience. And we also talk about combining that with quantitative system data. We actually talk about both of those things as being really one and the same: it's a way of measuring developer workflows, whether you're measuring them through surveys or through your systems. You're really trying to measure the same thing. You can get different perspectives and viewpoints on it, but that's what it really boils down to.

Laura Tacho: Yeah. I'll share my pizza analogy: SPACE is like studying every single kind of pizza topping, sauce, dough, and style of making things. DevEx is a restaurant: when you're hungry, you go to a restaurant, you don't go study everything, but if you want to open a restaurant, then it's really important to understand SPACE. And DORA is the same pizza that your team makes to track your progress, or that you compare across different restaurants.

And I like the way the DevEx framework makes SPACE a bit more accessible to a lot of leaders. I think SPACE is very comprehensive, and when you have a good command of it, it can be incredibly helpful. But we all know that time is not really something a lot of leaders have, and I think SPACE can sometimes seem overwhelming or unapproachable because it leaves so much up to the individual, which is by design, and I think that's a good thing. But in practice, sometimes we just need a little bit of guidance and a push in a particular direction. And DevEx does that in a balanced way. It's not overly prescriptive, but it does help push you toward the latest research and what follows from there.

Abi Noda: Yeah. And it's funny, when we were working on the DevEx framework, Margaret-Anne Storey, Peggy, co-author of SPACE, was rightfully really cautious about what we put out, because she'd seen what happened with SPACE. She'd seen that a lot of good had come out of it, but there were some challenges, like people using it to justify things like lines of code. So that's just a little behind-the-scenes look: it was really hard to come up with this, and it's really hard to put out these frameworks and examples knowing the potential consequences. And we see challenges with the DevEx framework as well. I think one of the biggest challenges is that a lot of people, leaders in particular, just don't really buy into the idea that you can measure things reliably by asking people questions.

Just one example is build times. Everyone defaults to, let's measure build times through our CI/CD pipeline data. And that's absolutely something you should do. But say you want to answer a question like, how often is a developer personally waiting more than 30 minutes for all build processes to complete? We've actually tried to answer that with system data, and it's a really, really tough question to answer that way. Survey data isn't 100% accurate either, but you can at least get there, or at least try to answer the question. So that's one of the challenges we see with the DevEx framework: the idea that you can measure objective things with human-provided responses is a bit of a difficult concept.
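To make the contrast concrete, here is a minimal sketch (in Python, with entirely made-up event data and field names) of the attribution problem described above: even when CI data records wait times, tying a wait to the developer who was personally blocked is an assumption that real pipeline data rarely supports, whereas a survey can simply ask the person.

```python
# Hypothetical sketch of answering "in how many weeks did each developer
# personally wait more than 30 minutes on builds?" from CI event data.
# The data shape (developer, week, minutes_waited) is an illustrative
# assumption; real CI logs usually record builds, not who was blocked.
from collections import defaultdict

builds = [
    ("ana", 1, 12), ("ana", 1, 45), ("ana", 2, 8),
    ("ben", 1, 5), ("ben", 2, 31), ("ben", 2, 50),
]

def weeks_with_long_waits(events, threshold=30):
    """Per developer, count weeks containing at least one wait over threshold."""
    long_weeks = defaultdict(set)
    for dev, week, minutes in events:
        if minutes > threshold:
            long_weeks[dev].add(week)
    return {dev: len(weeks) for dev, weeks in long_weeks.items()}

print(weeks_with_long_waits(builds))  # {'ana': 1, 'ben': 1}
```

The computation itself is trivial; the hard part in practice is producing the input, since mapping a pipeline run back to the individual who was actually waiting on it is exactly what system data tends not to capture.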

Laura Tacho: Yeah. And I actually want to answer one question from the audience: how do you navigate the business's desire for quantitative metrics when the best way to handle it via DX is qualitative metrics? I think there are a few ways to answer this. First of all, there is organizational inertia that sometimes works against us, especially in engineering, which is a very numbers-based, logical field, where there's a hesitancy to adopt a new way of thinking that focuses more on human attitudes. I sometimes find it difficult myself, because a lot of us were engineers in the past. We learned how to debug systems, we look at dashboards and telemetry, and then we want to debug our teams the same way. But the reason we need all that instrumentation is that our systems can't talk to us. If they could say, "I have a memory leak here, just fix this," they would, and our lives would be a lot easier. Fortunately for us, our teams can tell us.

Also, an interesting thing I want to touch on is that survey-based doesn't only mean qualitative metrics. This is one of those things that makes sense once it clicks; it's something I had to scratch my head about for a bit myself. You can collect workflow metrics, quantitative metrics, objective metrics, through a survey. Self-reported doesn't only mean qualitative data about how I'm feeling, and that's a huge benefit. For example, DORA: again, the research is based on self-reported workflow metrics. It's much easier to ask, "Abi, how many times did you deploy your main application to production last week?" than to sift through all the builds in a CI/CD pipeline. The developer can give you an answer much faster.

It's a quantitative answer to a question; it's just self-reported. Oftentimes we think of scraping an API as the source of quantitative, objective data, and of survey data as only qualitative data about how people are feeling, when in reality self-reported data can be both quantitative and qualitative. I want to call that out because, when you're looking at what you want for your organization and thinking about collection methods, surveys are a very easy way to do this quickly, and you can capture both kinds of data with a single collection method. In fact, I was just reading this: Accelerate is the book that put DORA metrics on the map, and I think a lot of you who are using DORA metrics have probably heard of it. Chapter 14 is called "Why Use a Survey?"

So these ideas are not new; they've been around and in practice for a long time. I think DORA shows us how rigorous a survey-based methodology can be. That's a little bit of ammunition for a hesitant executive team, or a hesitant board, or even hesitant engineering leaders who are thinking, well, surveys are just about feelings and I want hard data. It's not mutually exclusive. Because Accelerate and DORA are put up on a pedestal as the canon of quantitative, objective metrics, I think they're a really effective way to change some minds and make people more open to using surveys.
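As a small illustration of "quantitative but self-reported," here is a hypothetical Python sketch that aggregates survey answers about deployment frequency into a quantitative summary. The answer buckets and the summary logic are illustrative assumptions, not the exact DORA survey scale.

```python
# Hypothetical sketch: self-reported answers to "how often did you deploy
# to production?" aggregated into a quantitative summary. The bucket
# labels ("daily", "weekly", "monthly") are assumptions for illustration.
from collections import Counter

responses = ["daily", "weekly", "daily", "monthly", "weekly", "daily"]

def deployment_frequency_summary(answers):
    """Return per-bucket counts and the most common (typical) answer."""
    counts = Counter(answers)
    typical, _ = counts.most_common(1)[0]
    return dict(counts), typical

counts, typical = deployment_frequency_summary(responses)
print(counts, typical)  # {'daily': 3, 'weekly': 2, 'monthly': 1} daily
```

Nothing here is about feelings: each answer is a factual, countable claim about a workflow, collected through a survey rather than an API.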

Abi Noda: I would also add, you're not alone in having that challenge. When we were writing the DevEx paper, Nicole and I got really deep into this specific problem of how to frame it in a way that wasn't antagonistic. So my practical advice to the person who asked the question is: don't make it a war of survey-based versus system-based. Don't get into that argument of which one's better and which one's cheaper. Take an "and" approach. Accept that both have advantages and disadvantages.

But I think most organizations, while aligning on the fact that both are needed and valuable, could also align on the fact that survey-based is most often the most practical place to start. Even for organizations that are dead set on system metrics, actually setting those up in an accurate way suited to your organization is tricky, even with an off-the-shelf tool. If you go and build it out yourself, we've seen that take years; I know of companies that have spent years building out DORA dashboards. So just in terms of time and money, surveys are probably going to give you more bang for your buck when you're getting started.

Laura Tacho: Yeah. We're just about coming up on 15 past the hour, and we're nothing if not punctual, even despite the technical difficulties. I want to cover some of the big points we talked about and then ask that final question of what you should pick. So we talked about DORA: those four metrics. They're standardized, and we get benchmarking data. They're great for teams going through digital transformation, teams that want to keep a pulse on general organizational performance, maybe teams that are having trouble deciding which metrics to pick. You can go to dora.dev and check it out. A lot of DORA metrics are included off the shelf in a lot of developer tooling, so I wouldn't say they're trivial to get, but they're pervasive, and they're probably the first set of metrics you're going to come across.

On the other end, we have SPACE, which is very abstract and in fact is not a set of metrics at all; it is a framework for thinking about metrics. I think SPACE is great for teams and organizations that are trying to define what productivity is, and also for teams that want to challenge their own definitions of productivity by mapping their beliefs onto the SPACE framework and seeing where the gaps are. And I think the DevEx framework is great for teams that are looking to improve developer experience and developer productivity, that want to use the latest research and include the voice of the developer, and that want to use metrics to actually drive change: to make the day-to-day working experience of developers better and make them happier and more productive. So Abi, the last question before we wrap up: which one should you pick?

Abi Noda: I think the TLDR answer is, for just getting started with developer experience and developer productivity, I'm biased, but I would suggest the DevEx framework as a good place to start, knowing that it overlaps heavily with SPACE and includes some of the DORA metrics, as we've talked about. So you're getting an all-in-one package.

Laura Tacho: That's true.

Abi Noda: If you're taking a step back and really want to think through this on your own, think about productivity, dig into defining your own metrics, and maybe even go beyond just developer productivity, then the SPACE framework is a must-read for everybody, and I think it's a really useful way to think about the problem in breadth. And if you're really focused on continuous delivery, DevOps, even incident response, SRE, platform capabilities, I think DORA is still really relevant today. A lot of the organizations we work with get a lot of value out of those metrics. And again, I would remind people that the DORA metrics are in SPACE, and some of the DORA metrics are in DevEx, so you don't need to think of these as mutually exclusive. But that's the TLDR.

Laura Tacho: Yeah, think about why you're doing it and then figure out the metrics and the frameworks that fit. I think that's sort of the TLDR from this. This is sort of the first in a series of free talks that I'm going to be hosting on various topics that are relevant to engineering leaders. So there's another one in two weeks about the impact of AI and generative AI on developer productivity. Thank you all so much, and thanks for showing up for your teams and taking time to educate yourself on this because not everyone does that, and I think that's cool. So thanks a lot and we'll see you next time.

Abi Noda: Thanks everyone. Thank you so much for listening to this week's episode. As always, you can find detailed show notes and other content at our website, getdx.com. If you enjoyed this episode, please subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Please also consider rating our show since this helps more listeners discover our podcast. Thanks again, and I'll see you in the next episode.