[00:00:00] Sagar Batchu: Hey everyone. Welcome to another episode of Request Response. I'm your host, Sagar, CEO and co-founder of Speakeasy. Today we are joined by John Kodumal, co-founder and former CTO of LaunchDarkly. John, it's great to have you.
[00:00:14] John Kodumal: Hey, Sagar. Thank you so much. Really excited to be here.
[00:00:17] Sagar Batchu: Just to kick things off. I think a lot of us know LaunchDarkly, but just in your own words, I'd love to hear how you describe LaunchDarkly and what your role at the company was.
[00:00:28] John Kodumal: Yeah, I'm one of the co-founders of LaunchDarkly. I started the company over 10 years ago with my co-founder Edith Harbaugh. We're over 100 million in revenue, and we have 5,000-plus customers. It's a developer-focused tool, and I'll describe what it does versus how it does it.
[00:00:51] So what it does is fundamentally, I think the power of LaunchDarkly is that it takes two steps in the software development lifecycle: deployment and release, which were historically thought of as one thing—you deployed your software and everyone was exposed to that new version of the software.
[00:01:07] And what LaunchDarkly did that was so powerful was that we split that up into two stages. We said, you can deploy your software, but that doesn't have to be the same as releasing it. And then we gave you these extremely fine-grained controls to choose how you release that software. And the benefits of that are things like:
[00:01:24] Being able to roll out a new change to a small percentage of your user base to evaluate its quality and whether it has any negative impact on your infrastructure, experimentation, controlled rollouts of beta software. It even enables things like trunk-based development for teams that
[00:01:43] want to actually just merge code to mainline as quickly as possible and avoid doing branching. The mechanism that it uses to accomplish this is feature flags. And I start with what it does and why it's impactful before how, because a lot of people look at just that idea of feature flags, and they think that it's small, and they're shocked that there's a multi-billion dollar company that does feature flags.
[00:02:06] But it's not about feature flags. It's about all the changes to the software development lifecycle that feature flags enable.
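The deploy/release split John describes ultimately comes down to a flag check guarding the new code path. Here is a minimal sketch of a percentage rollout; the names and hashing scheme are illustrative, not LaunchDarkly's actual SDK logic:

```python
import hashlib

def flag_enabled(flag_key: str, user_key: str, rollout_percent: float) -> bool:
    """Deterministically bucket a user into [0, 100] and compare against
    the rollout percentage. The same user always lands in the same bucket,
    so their experience is stable across requests."""
    digest = hashlib.sha1(f"{flag_key}:{user_key}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100
    return bucket < rollout_percent

# Deployed code is guarded by the flag; "release" is just raising the
# percentage in a dashboard, with no redeploy.
if flag_enabled("new-checkout", "user-123", 10.0):
    pass  # new code path
else:
    pass  # old code path
```

Because the bucketing is a deterministic hash rather than a random draw per request, a given user stays in or out of the rollout consistently as traffic scales.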
[00:02:13] Sagar Batchu: Thanks for that. Yeah. Something that rang true with me is this idea of separating deployment and release. I love that framing. It makes punching that big red button a lot less scary for us as developers and more natural. So yeah, I love that framing. I think I'm curious before you guys started, what were people doing?
[00:02:31] Was it just simply that deployment and release were coupled as one?
[00:02:36] John Kodumal: I think so for almost all companies that was how they did it. I was at Atlassian prior to starting LaunchDarkly and we ran into scenarios on the projects that I was on where we desperately wanted this capability, the ability to separate deployment from release and back then, pre-LaunchDarkly people had heard about this system.
[00:02:56] There was a talk in 2009 from some of the architects at Flickr. There was one of the seminal talks on DevOps, where they talked about the feature flagging infrastructure at Flickr back in the day. And we were envious of that. We wanted that capability. But back then, we had two options if we wanted to do what we internally at Atlassian called a dark launch, hence LaunchDarkly.
[00:03:17] We had two options. There was this homegrown tool, which was extremely challenging to use, very stack specific. It worked for the Java stack and nothing else. It had almost no UI and a lot of the use of it involved modifying a database directly. It wasn't exactly a tool where a PM could say "I want to release this to 10 percent of the user base that are the most active users of the Atlassian marketplace," which was the product that I worked on.
[00:03:43] The other capability that we had that sort of quasi-approximated this was a front-end experimentation tool, Optimizely, and there were scenarios where we were abusing Optimizely to try to mimic some of these effects. "I want a certain thing to happen in the UI, so I'm going to use Optimizely and run an experiment rolled out to 100 percent of my users."
[00:04:04] We did that frequently, but it was full of all sorts of challenges because technologically, it was very different from a feature flag. Optimizely front-end experimentation was basically JavaScript injection as a service: you had a UI that built out jQuery and allowed you to update DOM elements on the fly.
[00:04:25] And so you got all sorts of negative effects of that. It wasn't baked into the code. It wasn't baked into the dev loop. You got a flash of unstyled content because the page would load, and then the Optimizely stuff would layer in on top of that. And so the pages would flash. So all sorts of reasons why that really wasn't the best way to do this.
[00:04:43] But I think for most companies that was the world they were in. There were a couple examples of very forward-thinking companies that had really built out frameworks for this. And then you had a really poor homegrown system, which was most commonly what we'd see when we came into prospects.
[00:04:58] If they had anything at all, it was a homegrown system that wasn't very sophisticated, or misuse of front-end experimentation frameworks—and even back then (this was circa 2012 or so), not very many people were even using front-end experimentation at that point.
[00:05:14] Sagar Batchu: For one, that sounds like you all went on a journey to get to this point. I can definitely say that from my own experience, building Speakeasy, one of the first things that we did when we started to ship product was figure out what was going to be our feature flagging and release methodology, so to speak.
[00:05:30] So it's very clear you all have really impacted the SDLC and really changed how we think about it. So it's pretty awesome. On that note, when I was starting Speakeasy and really thinking about the API space and how we can impact it, LaunchDarkly was actually one of the companies that I looked at because you had an amazing suite of SDKs and kind of developer experience on the API, which traditionally isn't the case with most companies.
[00:05:56] And, for those of you listening who don't know, Speakeasy is essentially a modern toolchain for SDK development. And one of the things that we do is make it really easy to build and launch SDKs. So I'm curious to hear from John, like the role that the API played for you and why you invested so much in SDK development which, to me, seemed ahead of the curve.
[00:06:17] John Kodumal: Yeah, that actually was—I think when we started LaunchDarkly, again, I mentioned these front-end experimentation frameworks, and when we were trying to get investors to be interested in us, a lot of them didn't understand the difference between what we were claiming we were going to do and those frameworks, because we had a very early product, and it didn't show off a lot of what LaunchDarkly is capable of now, obviously, but it showed some of the essential capabilities.
[00:06:43] But people really didn't get why we were different, why we're not just one of these experimentation frameworks, or why our focus initially wasn't on experimentation. And I think for us, it was all about understanding the software development lifecycle and understanding the role LaunchDarkly played in the software development lifecycle.
[00:07:02] And I talked about how, well, fundamentally we changed the software development lifecycle. We separated deployment from release, but that also meant that we needed to embed ourselves into people's software development lifecycles. And I think my entire career I've been fortunate enough to be in roles where I got to observe the SDLC at different size companies.
[00:07:22] My first job was at a company called Klocwork, which was commercializing this very sophisticated static analysis technology, and I was customer-facing a lot of the time. So I'd go out and observe how some of these companies built software, from financial institutions in Japan (I got to visit Japan a few times) to BART.
[00:07:39] To Fortune 500 companies, everything—I got to see how their software development lifecycles worked. What I realized is that they're all different. Everybody builds software a little bit differently. They're all trying to accomplish the same things, roughly speaking, right?
[00:07:54] They're trying to empower their teams to be fast, deliver high-quality code and deliver it with precision according to some set of specifications. And I realized in order to fit into someone's SDLC, you have to respect the differences that make their software development practices different.
[00:08:10] And it's impossible to do that in a product in a one-size-fits-all way, where you just build one thing that works for everybody out of the box. You're never going to accomplish that. Instead, what you needed to do was a little bit of both. You needed them to adapt their SDLC a little bit to your tool and your philosophy of how to build, but you also had to work with them.
[00:08:33] You had to adapt and mold your product to map to meet their development lifecycle, and that, going full circle, meant to us that we had to build in an API-first driven manner, like everything in the product had to be driven via API, and that was our philosophy from day one. It was something that I took from Atlassian, like the power of APIs.
[00:09:00] The first thing that I built at Atlassian was basically an API. It was an API for the plugin system, so it allowed third parties to build software integrations into Jira and so forth. It was some of the technology framework that allowed that to happen. And we built an API for installing those into Jira and all the other Atlassian products.
[00:09:24] I spent months just speccing it out, making it—I read the Fielding thesis on RESTful API design, and I was like, "Everything is going to be beautiful in this API," and to the best of our ability, we did that. We invested massively in this API as a team, and very few people really understood why we were focusing so hard on that, but 10 or 15 years later, of all the code that I've written at Atlassian, that API is still there. It's solid, rock-solid foundation stuff. And so getting APIs right is something I learned back at Atlassian, something that I took with me, and something that I built into LaunchDarkly from day one, and that was such a boon for us to have invested in all of the things that make an API great from the earliest stages.
[00:10:13] And then just instilling this philosophy of "if you didn't build an API for that, you can't ship it because it's not done yet."
[00:10:18] Sagar Batchu: Yeah, that's fascinating. I think this idea that, especially with what you guys were doing historically, when you build software that other people rely on, you're embedded inside their own products. That's something that really resonates with me. I think
[00:10:33] when you build and launch an SDK, like that's exactly what you're doing. I'm curious, did you all measure metrics around success? I hear customers talk about everything from time to 200, so like time to first successful API call to like time to second successful API call and so on and so forth.
[00:10:52] Given your kind of unique place in the SDLC, were there particular metrics you were looking at?
[00:10:57] John Kodumal: Yeah, I think that's—first of all, it's an important distinction between the SDKs and the APIs, and we treated them differently. Like the SDKs, they didn't actually use the public API. In fact, they use this very funky streaming API using server-sent events. And so we had a different set of metrics for those, because as you point out, we viewed those SDKs as being like part of our customers' runtime stack, and therefore very precious. And so we measured key metrics for those. In terms of the core metrics, we had these dashboarded up, and we actually had—I, at the highest level, had a deck that I reviewed every quarter
[00:11:39] with initialization rates and time to update for the SSE APIs, and that was the message delivery end-to-end. If you push a change on your feature flag dashboard and decide to increase the rollout, what is the time required before that hits the SDKs? And the other thing that was a big challenge with those streaming APIs was consistency.
[00:12:02] The way our SDKs worked—let's say your service was scaled up to 500 different servers and was auto-scaling. If there was a feature flag change, you'd have 500 copies of our SDK connected to our SSE API.
[00:12:19] We needed that message to hit all 500 servers, like guaranteed pretty much. Otherwise, you ended up in this weird state where a customer would load a page, they'd receive some set of features, and then on reload, if they hit a different backend server, they'd get an inconsistent experience. And so in some way, we viewed ourselves as this giant real-time distributed database for these tiny little JSON payloads, right?
[00:12:44] That's effectively what the technology was that we built. Those are the key metrics. It's like initialization time and update latency for the streaming services. On the other side of things for our APIs, we viewed those as mission-critical too. They were a little bit less mission-critical because generally speaking, if you were using our public API, then you were building some kind of integration with LaunchDarkly that you were using within your software development lifecycle, like we had customers that would build test suites that would flip a ton of feature flags and test pieces of functionality.
[00:13:16] Again, that wasn't necessarily what we recommended doing when it came down to testing with feature flags, but if that was what the customer wanted in their SDLC, we wanted to be able to support that. And so for that, we did look at like different endpoints and we looked at P99s and P95s for those, mostly latency, things like that.
[00:13:34] The other issue that was really challenging for us on those things is it was hard to predict some of the ways that our APIs were being called. And so rate limiting was like a big challenge that we had to get right. The other thing I learned about rate limiting is that it's hard to get people to actually respect limits
[00:13:50] because you put these header values and you're like, "This is when you should retry," and they're like, "I'm not going to program to that. I'm just going to slam you over and over again until you respond properly." So that was a really big challenge for us with APIs.
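A well-behaved client for the rate limiting John describes would honor the server's retry hint rather than slamming the endpoint again. A sketch, assuming a `Retry-After` header carrying seconds (the header name is a common HTTP convention, and the `send` callback is illustrative):

```python
import time

def request_with_backoff(send, max_attempts=5):
    """Call `send()` (which returns a (status, headers) pair) and, on
    HTTP 429, sleep for the server-advertised Retry-After interval
    instead of immediately hammering the API again."""
    for attempt in range(max_attempts):
        status, headers = send()
        if status != 429:
            return status
        # Respect the server's hint; fall back to exponential backoff
        # if the header is missing.
        delay = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return status
```

The point is simply that the retry metadata the API already sends is the contract; a client that programs to it gets throttled gracefully instead of erroring out.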
[00:14:04] Sagar Batchu: I think what's really interesting there is that I had no idea you all were built on SSE, which is, I think, very topical all of a sudden today with all of these chat and LLM applications using SSE as the primary transport method. It's really interesting too, because it isn't actually describable
[00:14:22] natively on the OpenAPI spec, like there isn't a great methodology around it. We've modeled it on the OpenAPI spec here at Speakeasy for a lot of our customers, but it doesn't actually exist in the specification itself. And so as a result, there's not a lot of guidance on what does great look like for an SSE API?
[00:14:40] What are the metrics you should be measuring for success? So clearly LaunchDarkly was like way ahead of its time using this thing that just, I feel, is having its moment today, thanks to ChatGPT and all the kind of LLM-based apps.
[00:14:53] John Kodumal: Yeah, it's funny you mentioned that because we started using SSE very heavily in about 2016. And one of the criteria that we had for deciding to adopt the technology is like, how niche is this? Who else is using this? And back then, no one was using it. And the reason I knew no one was using it
[00:15:16] was because all the open source libraries were like horrendously buggy. They did not work at all. So we ended up forking most of them. And I think we made some good contributions back. I couldn't tell you whether or not the open source libraries for SSE are actually the ones that we contributed back to at this point.
[00:15:36] But we had to build SSE clients. They either were very buggy and we'd fork them and fix them and contribute back, or they didn't exist because we had SDKs for Haskell—everything you can think of, we had an SDK for it. And in some of these scenarios, we were just building
[00:15:53] SSE libraries basically from scratch. It's not that complex, but it still gives you pause when there are no open source libraries out there, or they just don't work.
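The SSE wire format John's team kept reimplementing is indeed simple: text lines with `event:` and `data:` fields, dispatched on a blank line. A minimal parser sketch covering that common subset (real clients also handle `id:`, `retry:`, reconnection, and incremental decoding):

```python
def parse_sse(stream_lines):
    """Parse server-sent events from an iterable of text lines.
    Yields (event_type, data) tuples. Handles `event:` / `data:`
    fields, blank-line dispatch, and `:` comment lines (commonly
    used as keep-alive pings)."""
    event_type, data_lines = "message", []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line.startswith(":"):          # comment / keep-alive ping
            continue
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:   # blank line dispatches the event
            yield event_type, "\n".join(data_lines)
            event_type, data_lines = "message", []
```

The hard parts in production are not this parsing but connection management: buffering partial lines off a socket, reconnecting with `Last-Event-ID`, and surviving flaky proxies.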
[00:16:04] Sagar Batchu: Yeah. Totally. I guess fortune favors the bold in this case for you guys. It was pretty amazing that you did that. I think I'll have to follow up and send you a thread. We've been trying to push the OpenAPI specification to adopt SSE modeling as the standard methodology on the spec, and it's taking its time, obviously, to get adopted.
[00:16:22] So I'll have to send that to you.
[00:16:25] John Kodumal: Yeah, I can check that.
[00:16:26] Sagar Batchu: Yeah. Given that we're talking about, and obviously, lots of craziness going on in the space right now with respect to APIs, LaunchDarkly as a provider and for you on the outside now, are you seeing shifts in how consumers are actually consuming these APIs and how they're demanding tools and the various points of integration around it?
[00:16:47] John Kodumal: Yeah, absolutely. I think the first thing that I'll say is quite basic and may not initially seem connected to AI: people are demanding a lot more sophistication from APIs, I would say, right? I think that is because of AI, right? Because at the end of the day, it means that
[00:17:04] basically, to me, AI as it relates to APIs—the first impact that's obvious is that the barrier to build an integration or some kind of application has been lowered massively. And the language that is most natural to speak is API, right? Especially when connecting to different third-party services.
[00:17:24] So that means that there's gonna be more API adoption, and that means it matters more and more that there's some standardization to APIs. OpenAPI—I've seen massive pickup in traction for OpenAPI specs. We had an OpenAPI spec for our entire public API in, I don't know, 2017, 2018, and that was rare and a differentiator. And now it's pretty standard, which is great. I think that's a great move forward. But it also means that the quality of your APIs, not just the specification—the expectations have increased. People expect
[00:18:03] things that you didn't necessarily have in the first version of your API, right? Pagination had better be there, and it had better be done in a reasonable way. RESTful design patterns, that's great. Caching, rate limiting with reasonable retry policies—
[00:18:18] those are things that nobody had in public APIs 7, 8 years ago, and now I feel like you have to. It's table stakes. And there was this period of time where the developers, even at LaunchDarkly, would be like, "That's a lot of overhead." The quality that is required of a good public API has now increased,
[00:18:40] and that means if I'm building in an API-first way, "Man, you're making me do a lot of work. That's going to increase the cost of every feature significantly. Is this really worth it?" And once you assure people that, yes, it is absolutely worth it, then you're like, "Okay, how do I reduce the cost?
[00:18:56] How do I make it the case that I can build in an API-first driven manner without paying this tax over and over again for everything that I've built?"
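On the "pagination had better be done in a reasonable way" point: cursor-based pagination is one common answer, since it stays stable while the underlying data changes. A sketch of the consuming side, assuming a hypothetical response shape with `items` and `next_cursor` fields:

```python
def fetch_all(fetch_page):
    """Drain a cursor-paginated API. `fetch_page(cursor)` returns a dict
    with `items` and an optional `next_cursor`; iteration stops when the
    server stops returning a cursor. Unlike offset/limit, an opaque
    cursor doesn't skip or duplicate items under concurrent writes."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return items
```

The design choice that makes this "reasonable" is uniformity: every list resource paginates the same way, so one loop like this works across the whole API.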
[00:19:06] Sagar Batchu: Yeah, no, totally. The bar has been raised tremendously. And even in the two years of our existence, we're seeing customers come to us now with pagination with SSE with retry, idempotency, all this stuff built onto the spec, which is really fantastic. And I think it's even more awesome because
[00:19:24] with all of these autonomous agents and stuff around the corner, I'm starting to see people think of the API spec, or just your API definition, as something similar to a sitemap, right? In the days of web crawlers going crazy, people started putting up a sitemap or some kind of TXT file so crawlers could see what a site does—very similarly, I can see
[00:19:45] an API definition becoming this public spec, not just on an API that a human integrator would use, a developer, but also just on a site. And so you have agents come in and look for it, look for all of the models you have, whether it's paginated. Yeah, and as you said, it just means the table stakes are higher for what a good API is.
[00:20:05] John Kodumal: In a lot of ways, wasn't that the point of REST, right? That was what Fielding's thesis was all about. It was like representational state transfer. It's like a state machine. Your API is a traversable map.
[00:20:16] I had this person on one of my old teams who was like this REST zealot, and he was like, "Stop calling them endpoints. Nothing should be an endpoint. Everything should be traversable. If you have an endpoint, it's broken." And I was like, "I appreciate the pedantry, but I think you're right."
[00:20:32] I think you're right. That's how REST—that's how APIs should work. And that's how they need to work in a world where machines are trying to automatically generate clients, and we're trying to automatically construct meaningful applications on top of these APIs.
[00:20:51] They need to be traversable by a machine, not just a human, such that you can infer its behaviors and those behaviors should be predictable. It reminds me of back in grad school, if I could go off on a—this is a mild—
[00:21:04] Sagar Batchu: Please.
[00:21:05] John Kodumal: There was this guy on my team, and he was working on this code inference system,
[00:21:11] and it was basically: if I have X, and I want a Y, and it's all type-based. So it's: I've got a string and I need to generate, say, a JSON object. It would, from the types, figure out how to plumb through different function calls, basically forming the transitive closure across the types of the functions, stitching things together to infer a set of calls.
[00:21:32] This was like heavily into the Java Beans enterprise architecture days where you would try to do this stuff, but APIs should kind of work the same way. It's: I have this, and how do I get over there? Follow the types or follow the specs, right? Follow the OpenAPI specs.
[00:21:46] And it should be obvious how to do these things, and the degree to which your API is able to adequately, first of all, be constructed in a way where idempotency is there, and the OpenAPI spec is sensible, and pagination works in a reasonable way uniformly across all resources, then you're making an AI's job easier.
[00:22:07] You're making, therefore, your customer's job easier because they can then build with extremely low cost, these amazing interfaces, these amazing Lego kits on top of the product that you sell.
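The traversability idea can be made concrete with link relations: each resource advertises where a client can go next, so neither a human nor an agent hardcodes URLs. A sketch assuming a simple `links` map on each resource (real hypermedia formats like HAL or JSON:API differ in detail):

```python
def traverse(fetch, start_url, rels):
    """Follow a chain of link relations from an entry point instead of
    hardcoding endpoint URLs. Each resource is expected to carry a
    `links` map of rel -> URL. The client only needs the entry point
    and the relation names, so the server is free to restructure
    its URL space without breaking anything."""
    resource = fetch(start_url)
    for rel in rels:
        resource = fetch(resource["links"][rel])
    return resource
```

This is what "endpoints are broken" means in practice: the relation names, not the paths, are the stable interface a machine can infer behavior from.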
[00:22:20] Sagar Batchu: Totally. I love this framing of traversability. I think we think about it today as traversing from the API definition into your code and into a Python codebase or Java codebase, TypeScript codebase. And now you have that traversability problem that explodes when you have agents who are making—who often operate on a slightly more trial and error basis, right?
[00:22:42] Make multiple calls, often doing discoverability alongside the API call. They're not looking at docs and figuring out what to do; it's often making the call and seeing if it works. I really love that framing. On that note, I think traversability is a good way to also think about developer experience, right?
[00:22:59] If you can go from someone's products into your application into what you actually want to build really easily, that's great.
[00:23:06] John Kodumal: Just—
[00:23:09] Sagar Batchu: What are great hallmarks of developer experience for you? Are there any products that stand out? LaunchDarkly, of course, up there,
[00:23:15] but outside of that, things that have influenced the way you think about this?
[00:23:24] John Kodumal: It exists on multiple scales, right? At the macro scale, good DevEx is a developer tool that understands how it fits into the software development lifecycle. I brought up the Optimizely example earlier, right?
[00:23:42] And I don't mean this to throw shade at Optimizely in any way. It was a tool built for a different thing, but part of the reason the DevEx of using something like a front-end experimentation framework for feature flags was bad was because it didn't fit into the software development lifecycle, right?
[00:23:58] Like it didn't understand that staging environments existed. And therefore, you couldn't run an experiment on staging and then promote the experiment to production. That concept didn't exist. So at the macro scale, it's about understanding how I fit into this complex world: the software development lifecycle and the workflow of developers.
[00:24:16] On a micro scale, I think it's about—I like this traversability thing, because you talk about bots or agents or whatever, just traversing this graph and trying to guess at the behavior of the system. I think when you're building a good product, a good developer experience,
[00:24:34] you have to anticipate what the end user is attempting to do with your software, but even more than that, it's knowing that you're not going to be able to predict what they're going to do and giving them the escape hatches they need. Simple example: knowing that developers script things, right?
[00:24:48] They'll take aspects of your product, and they'll build their own tooling around it. And that may be API-driven, or it may not be, right? Maybe they have a workflow where they load up a page in your product and they copy a value out of a specific field. And then dump that into a command line script, right?
[00:25:05] They're not scraping. They're just—the initial step happens to be manual. Good developer experience: make it easy to copy that value. Bad developer experience: make it impossible to select that value and hit Control+C. And it's just one of those things where you should assume that every developer is a tinkerer.
[00:25:23] And you're not going to be able to anticipate all their needs. Therefore, they're going to build extensions to what you did to meet their own needs. I think of great developer experiences as those things. Yeah, there's a lot of tools out there that I think of as having great developer experience.
[00:25:36] Sometimes when I'm evaluating things—I do some angel investing and things like that—I'll just—I'll think, "Do these people know what they're doing?" and I'll play around and just see, "Did they get the little things?" And if so, I'll wager that they really understand
[00:25:50] how to build a great developer-focused product. Yeah, I don't know. That's it. I won't mention specific products, but there are a few that I've come in and been like, "Oh, okay. They know what they're doing."
[00:26:03] Sagar Batchu: Yeah, totally. The small things matter, right? And I think that's something anyone who thinks about developer experience would probably agree on. On that note, John, thanks so much for your time today. We've come to the end of this awesome chat. If people want to get in touch with you or stay up to date with what you are working on, is there somewhere they should go?
[00:26:21] Somewhere they can reach out?
[00:26:23] John Kodumal: Yeah, that's—I bailed on all forms of social media. You're gonna have to—yeah, you have to email me or write me a letter like a physical letter. And those are great ways to reach me. No, email is great. And I have a very guessable email address. So I'll let people figure that out.
[00:26:39] If you really want to get in touch with me, probably the first or second thing you try will work. So go for it.
[00:26:44] Sagar Batchu: I love that. That's awesome. Cool. John, thanks so much and hope to have you on the podcast again soon.
[00:26:51] John Kodumal: Thank you so much. Appreciate it.