Interviews with developers and API technology leaders.
Hosted by Sagar Batchu, CEO of Speakeasy.
speakeasy.com
[00:00:00] Scott Dietzen: Software in the real world, in the commercial world, has to be dramatically easier to consume. It has to be supremely reliable and secure—and easy, at least in its initial adoption.
[00:00:17] Sagar Batchu: Hey everyone, welcome to another episode of Request Response. I'm your host, Sagar Batchu, co-founder of Speakeasy. Today I'm joined by Scott Dietzen—or just Dietz—CEO of Augment Code. Scott, really wonderful to have you.
[00:00:30] Scott Dietzen: Thanks for having me. Glad to be here.
[00:00:32] Sagar Batchu: I'm going to have to remind myself through this conversation to call you Dietz. I'm going to keep switching between Scott and Dietz, so apologies in advance.
[00:00:39] Scott Dietzen: I'll answer to either.
[00:00:41] Sagar Batchu: Alright. Scott, tell us a little bit about how you got to work on Augment Code. I've been seeing Augment Code banners all over San Francisco, so very excited to be chatting.
[00:00:50] Scott Dietzen: If we go back to the early days, I actually did a PhD in machine learning a very long time ago. The technology was different, but we were trying to apply machine learning techniques—more logical systems back then—to help people build better programs. I'm a failed researcher, so I made the decision to try my hand as an entrepreneur. I did four startups with really large, complex software bases—things like application servers and storage—which led me to a deep appreciation for how supremely hard it is to craft and evolve a large, complex codebase.
[00:01:30] I was the CEO of Pure Storage most recently, and we had a successful IPO. I think we ramped faster than any storage business in history. It was a wonderful time to call it a career when that business got big enough that I no longer felt I was the best person to lead it. But when LLMs emerged, I got extremely excited that we would finally be able to help programmers do their jobs more easily. Software engineering is a very difficult discipline, and I saw this opportunity to make software engineering more fun and rewarding—and hopefully make software better.
[00:02:06] So I got to join Augment early on. And we tackled the opposite end of the spectrum from a lot of the other AI startups that have tried to catch the vibe-coding wave. Think of us as the "anti-vibe" company. We're trying to help software engineers who are managing tens of millions of lines and hundreds of thousands of files—not startup projects. We're trying to help them do their jobs better. And that’s a much taller order.
[00:02:57] Sagar Batchu: That’s a fascinating position. As someone who uses vibe-coding tools every day and watches my own company adopt them—our team is about 30% engineers and growing fast—it sounds like you're really talking about massive enterprises and big, mission-critical codebases. Is that fair to say in terms of your customers and the kinds of projects you support?
[00:02:57] Scott Dietzen: Yes. We work with some really large codebases, like from storage companies. DataDirect Networks, one of the leading AI storage companies, is using our AI to develop its software. We're popular in networking, databases, security companies, financial services—even very large consumer web applications are using the technology.
[00:03:20] Sometimes our users have started with other products. We see about 1,500 developers a day coming to us from Cursor. They often graduate into Augment when their project reaches a scale or level of sophistication where Cursor is no longer sufficient. Similarly for Copilot. So we're very happy to take that high end of the market.
[00:03:41] But I should say the product works great for vibe-coding too. We do have users who find our agent to be the most sophisticated on the market and choose to use us even in the early days of their projects—believing they can stick with us for the long haul.
[00:03:55] Sagar Batchu: That's really interesting. As I’ve been adopting these technologies myself, I've noticed that with every tool out there—and to be fair, I haven't tried Augment Code yet—I often start out as a basic user and then become a power user. As I grow into that role, I find myself having to bring in traditional software tooling—everything from simple bash scripts to managing my Git working tree locally.
[00:04:20] I’ve even started to wonder: would a business built around Claude Code make sense? One that provides shared context, scales with large codebases, and has more memory. So what you're building makes so much sense to me. I'm sure there are a ton of enterprises struggling to get the most out of—not just vibe coding—but agentic IDE experiences, because the tools aren't built for scale. It sounds like you've landed on something really important: bringing ease of use to developers everywhere, not just startup folks in Silicon Valley.
[00:04:53] Scott Dietzen: I think you’ve hit on the problem exactly. Context is ultimately what constrains these applications—especially agents. As we've moved up the stack from simple completions and chat into agents—which make up the majority of our use cases today—agents need to be more self-sufficient. And to do that, they need to understand the codebase they're working on. Context has only gotten more valuable.
[00:05:22] We spent the first two and a half years of the company trying to figure this out: how can we select the optimal sub-context to pass to a large language model so it understands what it needs to know for the given task? Because passing an entire codebase—tens of millions of lines—as context is ludicrous. First, it’s far too expensive. And second, AI models aren't that much better than humans at memorizing massive inputs. So even if you feed them everything, they won’t necessarily remember or understand the important parts for the task at hand.
[00:05:56] So we’ve solved this with a real-time semantic map we build and a context engine that selects exactly the right pieces needed.
[00:06:05] Other approaches, like fine-tuning, are far too expensive, and customers also worry about leaking intellectual property into the models. The context engine and real-time semantic map avoid both problems.
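The context-selection step described here can be sketched as retrieval over code chunks: score each chunk against the task, then greedily pack the best matches into a fixed context budget. This is a toy illustration, not Augment's actual engine; a real system would use a learned code-embedding model rather than the bag-of-words scorer below.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real context
    # engine would use a learned code-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_context(task: str, chunks: list[str], budget_chars: int) -> list[str]:
    """Rank code chunks by relevance to the task, then greedily
    pack the highest-scoring ones into a fixed context budget."""
    ranked = sorted(chunks, key=lambda c: cosine(embed(task), embed(c)), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        if used + len(chunk) <= budget_chars:
            picked.append(chunk)
            used += len(chunk)
    return picked

# Hypothetical repo chunks: the retry-related chunk ranks first
# for a retry-related task, so it is packed into context first.
repo = [
    "def parse_config(path): ...  # config loading",
    "def retry_request(url, attempts): ...  # network retry logic",
    "def render_dashboard(data): ...  # UI rendering",
]
ctx = select_context("fix retry logic for failed network requests", repo, budget_chars=120)
```

The greedy packing is the key idea: the model never sees the whole codebase, only the slice most relevant to the task, within a hard size limit.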
[00:06:20] Sagar Batchu: I love that. We’ve seen the world shift from developers thinking about prompt engineering to focusing more on context engineering.
[00:06:29] Context is now the true limited resource. You can abstract away compute and GPUs. You can even assume an enormous model and infinite data—but the context you can provide in each interaction remains finite.
[00:06:43] I’d love to get your thoughts on the recent developments around tool calling. We’ve started helping other companies manage tool calling via MCP—the Model Context Protocol—and the limiting factor is indeed how well we can manage context in MCP servers. When interacting with APIs, there’s a flood of data entering the LLM’s context, and most APIs aren’t designed to deliver information that efficiently. They assume the integrating system handles context, which doesn’t hold with LLMs.
[00:07:15] So, we’re seeing context become the most critical resource our customers need help managing.
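One practical mitigation, sketched below with hypothetical field names and payloads, is for the tool layer to project each API response down to an allowlist of fields before it ever enters the model's context:

```python
def trim_for_context(response: dict, keep: list[str]) -> dict:
    """Project a verbose API response down to an allowlist of fields
    before handing it to the model, so one tool call doesn't flood
    the context window. Nested keys use dotted paths."""
    out = {}
    for path in keep:
        node, parts = response, path.split(".")
        try:
            for p in parts:
                node = node[p]
        except (KeyError, TypeError):
            continue  # skip fields missing from this particular response
        out[path] = node
    return out

# Hypothetical verbose payment-API payload: the model only needs
# three fields to answer "did the charge succeed?"
payload = {
    "id": "ch_123",
    "status": "succeeded",
    "amount": 4200,
    "metadata": {"internal_trace": "x" * 2000},  # noise we drop
    "customer": {"id": "cus_9", "email": "a@b.co"},
}
slim = trim_for_context(payload, ["id", "status", "customer.email"])
# slim == {"id": "ch_123", "status": "succeeded", "customer.email": "a@b.co"}
```

The point is that the context budget is spent by the tool layer's design choices, not by the upstream API's verbosity.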
[00:07:21] Scott Dietzen: We're tremendously excited about MCP. We’re seeing increased usage across the product.
[00:07:28] The initial use cases were simple—like having the agent pull a Linear or Jira ticket, or getting context from Slack, Glean, or Notion to complete a task.
[00:07:38] But now we’re seeing ISVs using MCP to package up relevant documentation and code samples so the AI can gain proficiency with a codebase. For example, Augment users developing Redis, Mongo, or Stripe apps benefit if we can shorten the distance from idea to implementation.
[00:08:00] MCP helps deliver the right context to enable that—and it’s valuable not only for Augment users but also for the companies building and running those MCP servers.
[00:08:10] That said, one risk with AI is that people get lazy about APIs. LLMs are like humans—they work better with really good APIs. So API design arguably matters even more in this new AI-enabled world.
[00:08:26] I worry developers will get lazy and over-rely on the AI rather than properly reviewing the results and focusing on crafting optimal APIs.
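That point can be made concrete: an agent, like a human, can call a self-describing API correctly on the first try, while a stringly-typed one forces guessing. Both functions below are invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Literal

# Hard for an agent (or a human) to call correctly: everything is
# stuffed into a dict, and the valid options are invisible at the
# call site.
def create_charge(args: dict):
    ...

# Self-describing: types, valid values, and defaults are all in the
# signature, so a model can generate a correct call without guessing.
@dataclass
class Charge:
    amount_cents: int
    currency: Literal["usd", "eur", "gbp"] = "usd"
    capture: bool = True

def create_charge_v2(charge: Charge) -> str:
    """Returns the new charge's id (stubbed for illustration)."""
    return f"ch_{charge.amount_cents}_{charge.currency}"

charge_id = create_charge_v2(Charge(amount_cents=500))
```

The same signature that helps a new teammate also helps the model: the constraints live in the API, not in tribal knowledge.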
[00:08:36] Sagar Batchu: You said it well. This idea of getting lazy with APIs—I'm already seeing it as browser agents and computer use become ways to interact with systems that lack proper APIs.
[00:08:48] There are many more websites and applications than APIs out there. It's easy to spin up a headless browser and let an AI navigate. That unlocks a lot.
[00:08:57] But I agree—over time, if we’re not careful, the quality and reliability of integrations will suffer. Much of our infrastructure is built on highly fault-tolerant, mission-critical APIs that run at scale. We can’t lose that.
[00:09:10] That’s why I view MCP as a great enabler. It can sit as a layer above an API and make it usable by anyone—from developers to non-engineers—while preserving the API’s structure and integrity underneath.
[00:09:26] Scott Dietzen: One thing we’ve noticed is that developers are moving toward a more meta-programming model with these AIs. Instead of manually manipulating the code, they talk to the agent and let it make the changes.
[00:09:38] But if you’re not careful, and you don’t verify what the AI outputs, you can end up with subpar design. Especially with APIs, this can be dangerous.
[00:09:48] Good APIs need to have longevity. They must be clean, simple—but no simpler. You want many times more lines of code calling an API than lines underneath it. That’s what makes APIs so powerful—they can last years and become the foundation for entire systems.
[00:10:06] A well-designed API leads to better overall architecture and more flexibility for the provider.
[00:10:12] We're going to see much more specialization in software—more APIs and more API consumption—but we must not forget the importance of strong API design. Developers have to take ownership and not just blindly trust what the AI suggests.
[00:10:27] Sagar Batchu: Absolutely. On that note, are you seeing any changes in developer habits—like people spending more time on tests and code quality vs the initial implementation?
[00:10:38] In our company, we’ve seen testing and guidelines set up first, even before the code. Things like SSA rules, CLAUDE.md, or test suites. Then we unleash the agents on the code.
[00:10:49] You have a great vantage point here—what trends are you seeing?
[00:10:53] Scott Dietzen: Yes, testing is definitely getting better. Developers hate writing tests. They're optimistic, especially about their own code. That optimism often leads them to deprioritize tests or write weak documentation.
[00:11:06] Agents have helped improve this because they generate higher-quality test cases and highlight issues that developers might miss.
[00:11:14] This trend isn’t limited to new code. We’ve had people take legacy systems—ones that were basically frozen because no one wanted to touch them—and use AI to build out comprehensive tests. That frees you up to modernize or refactor those systems with confidence that you're not breaking external dependencies.
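The legacy-code workflow Dietz describes is essentially characterization testing: pin down what the code does today, quirks included, then refactor against that safety net. A minimal sketch, with an invented legacy function:

```python
# Suppose this is a legacy function nobody wants to touch.
def legacy_price(qty, tier):
    # Quirky but load-bearing behavior: tier "b" gets a flat
    # discount, and quantities are silently capped at 100.
    qty = min(qty, 100)
    price = qty * 9.99
    return round(price - 5.0, 2) if tier == "b" else round(price, 2)

# Characterization tests pin down what the code *does today*,
# including the quirks, so a refactor can be checked against them.
def test_tier_b_discount():
    assert legacy_price(10, "b") == 94.90

def test_quantity_is_capped():
    assert legacy_price(250, "a") == legacy_price(100, "a")

def test_plain_tier():
    assert legacy_price(3, "a") == 29.97
```

Once the quirks are captured, an agent (or a human) can restructure `legacy_price` freely, with the tests guarding the externally visible behavior.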
[00:11:34] So yes, test and code quality are improving—especially unit tests and simpler forms of testing. System-level testing still requires human creativity, but LLMs are getting better there too.
[00:11:48] Overall, I’m hopeful that LLMs are going to help us build better software—simpler to use, more robust, more connected, and with less tech debt.
[00:11:58] There was over $2.5 trillion in economic loss to the U.S. economy last year due to software failures. If AI can help us eliminate tech debt, we can start to reverse that.
[00:12:11] Sagar Batchu: What a world we could live in—no tech debt backlogs! Every technical leader's nightmare is deciding whether or not to prioritize tech debt.
[00:12:20] Your vision is an inspiring one. If we get there, it'll be truly game-changing.
[00:12:26] Scott Dietzen: Every product has a long list of missing features and improvements to be made. And there’s so much software the world would love to have—but can’t afford to build.
[00:12:36] Reducing the barriers to delivering reliable software could unlock massive creativity and human value.
[00:12:44] That’s the optimistic vision I see for developer AIs.
[00:12:47] Sagar Batchu: You’re describing what I’d call backlog zero—the holy grail for every software organization. Most of us can't reach inbox zero, but backlog zero would be even more powerful.
[00:12:58] The funny thing is, if we ever got to backlog zero, we’d just invent more backlog. The limiting factor won’t be coding—it’ll be the human time needed to shepherd and review all that work.
[00:13:11] On that note, Scott, one of the final things I wanted to ask is—what are some of the most influential products or technologies you've used that shaped your thinking on great developer experience?
[00:13:20] Scott Dietzen: My first job out of college was building a distributed systems platform. We had all the bells and whistles—on paper, it looked amazing. But it was complicated to use and buggy.
[00:13:34] That experience taught me you can’t just build for the whiteboard. Software needs to be dramatically easier to consume—reliable, secure, and simple at first. More complexity can come later—but initial adoption needs to feel easy.
[00:13:51] Great API design comes from this mindset. You want to make developers feel empowered—like they’re in control. And you want things to be intuitive and productive from day one.
[00:14:01] That’s a lesson I’ve taken with me to every company I’ve worked at—and AI isn’t going to change that.
[00:14:07] There’s this narrative that AI will replace software engineers. But in all our customer experiences, the best outcomes come from engineers leading teams of agents—acting like a tech lead guiding bots to achieve goals.
[00:14:22] But it’s still up to the engineer to referee, review, and steer the direction. The AI doesn’t know when your architecture should become microservices or when it’s time to upgrade a library. The human still has to make those calls.
[00:14:37] It’s this merging of human and machine intelligence that will get us to the software promised land.
[00:14:42] Sagar Batchu: Thank you for that, Scott. That’s a fantastic vision.
One final, lighter question: in this future where I shepherd agents as a tech lead, will I be judged on how little money they spend getting their work done? It's an interesting rabbit hole to ponder.
Thank you again for joining us on Request Response. You're doing amazing work at Augment Code. I'm looking forward to trying it out.
For those listening, if people want to connect with you or learn more, what's the best place?
[00:15:11] Scott Dietzen: The product is designed for software engineers. They can install it directly inside their IDE. It works with VS Code and JetBrains tools like IntelliJ and PyCharm. It even works with Vim—although you don't get the full agent functionality there yet.
We're launching other interfaces soon—our CLI version will be out before this podcast is released, and web interfaces are on the way.
Our key differentiators are context—what we believe is the best-in-class context engine—strong security (we were first to market with SOC and ISO compliance), and full compatibility with existing IDE ecosystems. We don't fork VS Code. We integrate natively and securely.
Check us out at augmentcode.com.
[00:15:56] Sagar Batchu: You heard him—augmentcode.com. Scott, thanks again for your time today. We'd love to bring you back in the future to continue the conversation.
[00:16:05] Scott Dietzen: Sagar, thank you so much for having me. We're grateful for the time and look forward to continuing the discussion.