Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.
David: It's DevOps's time to shine, because with the advent of agentic AI, coding, everything, people are just getting a lot more done themselves.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is David Yek, who's a Senior Principal Engineer over at AWS. David, thank you for joining me.
David: Oh, I'm so excited to be here. Thanks for having me on. Longtime fan.
Corey: This episode is sponsored in part by my day job, Duckbill. Do you have a horrifying AWS bill? That can mean a lot of things: predicting what it's going to be, determining what it should be, negotiating your next long-term contract with AWS, or just figuring out why it increasingly resembles a phone number, but nobody seems to quite know why that is. To learn more, visit duckbillhq.com. Remember, you can't duck the Duckbill Bill, which my CEO reliably informs me is absolutely not our slogan.

Well, thank you. I appreciate that. Most people don't admit to that in recorded media, so I've taken that and I'm putting it on the website. It'll be great. Uh, you are a lead advisor on the Agentic AI team at AWS and, as recently reported by the New York Times, a trim man of 42 with a gray beard (which does not seem to be in evidence) and a jittery intensity.
David: Well, that will be in evidence. I think I've had plenty of coffee this morning.
Corey: So what are you doing these days? What are you up to? You've been at AWS a very long time, and agentic AI has, well, not been a thing for nearly that long. What's your backstory?
David: Well, yeah, I've spent the last around 20 years at Amazon, and most of that at AWS. There's kind of a gray area on exactly when I started in AWS, because so much of that time was on teams straddling both AWS and the rest of Amazon, teams that made the web service framework that all the AWS services and Amazon services use. So I'd say the whole time that I've been at Amazon, out of college, for 20 years, has been with pretty much a singular purpose, and that's been to make developers' lives easier.

Everything I've done seems to be on the same theme: I build something, I see what was painful about that, and then I build a thing to make that less painful.
Corey: When you say you're talking about making a developer's life easier, are you talking about developer experience? Are you talking about underlying capabilities of the platform?
Because making developers' lives easier can cover an awful lot of ground.
David: When I look at the most pain that I have, at least the way that I look at developer pain, it's around the ongoing maintenance and operations of a thing. I shouldn't say pain; it's just that the most amount of work that I'd rather not be doing is around ongoing operations.

And as a software developer, I like to automate my way out of that. That's actually why we do, and from my view have always done, DevOps at Amazon. What that's meant is: developers do the operations. There is no separate thing. It's just one thing.

Developers wear all the hats, just do everything. And the nice property of that is that I've been building my way out of, and automating away, the annoying stuff the whole time. When I started, I was on this team that ran Amazon.com's web server fleets. Arguably not a DevOps team, because we ran the web server fleet, but when you have hundreds of teams pushing code to the same web server environment, somebody has to...
Corey: You can't be a DevOps team when you're working with that much Perl. I'm pretty sure it's against the, uh, the bylaws somewhere.
David: Yeah, a lot of Perl to script the automation, a lot of Perl rendering the web, the HTML. Yeah, a lot of that. So in order to make operating that web server fleet easier, we had to build systems to help us automate it. We built alarm aggregation systems so that instead of our pager, like the one I have here, going off a hundred times, one for every web server, or a thousand times as we grew, it turned all of that into just one page.

But that takes time.
Corey: Well, now you're an Amazonian. I mean, I assume you carry six pagers.
David: Uh, just one. But I make sure that... well, I guess, no, I shouldn't say that, because it's my pager, it's my phone, to make sure that as a backstop it does the text message, and of course the app, uh, that we have. So, okay. Okay, good point. I have three pagers.
Corey: There we go. You'll get up to a full six pagers before too long. It'll be great. Your timing on this is impeccable, because as we record this, about an hour ago, you folks came out with the general availability of AWS DevOps agent. So now you can talk about it freely, but it wasn't enough time for me to actually kick the tires on it and ask the embarrassing "well, what about..." questions. So tell me, that was all planned, I assume. Basically, me talking to people dictates the entirety of AWS release schedules. What is DevOps agent? What's it do? I'm guessing it's gonna make developers' lives easier, just a hint, based upon how you started this.
David: It really is. I mean, it's kind of the culmination of everything I've been trying to do over the last 20 years at AWS. I've been working hands-on building DevOps agent now for some time, and what it does is respond autonomously to operational incidents, so that before you even open your laptop, it has hopefully fully root-caused the issue and suggested remediation steps for how to fix an alarm. It's also proactive, in that it will scan through everything, sift through everything, to find operational improvements that will prevent future incidents and future issues, and optimize things for you.
Corey: So I know nothing about the service yet, so forgive me if this turns into a really awkward line of questioning.

But there have been a number of bites at this apple historically, mostly before the rise of gen AI, and the biggest stumbling block to all of this was that either you had to build your systems in a very prescriptive way, or spend a tremendous amount of time instrumenting and getting things aligned in such a way that these agents could then do anything with it. And that was work that very often just never got done. Has that, uh, nut been cracked?
David: That is exactly why we're building DevOps agent now, and why I think this is gonna be the right swing at the apple, or whatever metaphor you want. We've always been improving developers' lives by making new services.

I worked on DynamoDB because I didn't want to have to do database ops anymore, so now I don't have to set up replication and deal with backups. We built Lambda because I don't want to have to patch operating systems and deal with server failures. Those are great, and they solve that real problem for customers, but of course they have to adopt them; they have to do work to invest in using them and switch their application around.

Corey: Just migrating your relational database to Dynamo is not really that straightforward of a lift.
David: Yeah, the migration part. Using it from scratch? Great. But yeah, it's unfortunate. It's like, how do we make migration easier? And we keep chasing this. The advent of LLMs is actually the magic here.

Before, it was, okay, everything has an API, you can make anything talk to anything. That's what the whole web service API, service-oriented-architecture thing was about, but you have to do work to integrate those point integrations, to make those things work together. With the advent of LLMs, as long as you have, and actually, arguably in the future you won't even need it, but as long as you have an MCP interface to a thing that isn't special-purposed for a point integration, you just have an MCP server that's essentially just documentation plus API. Once you have that, anything can talk to anything, and it works great.

Corey: And so, I'm going to say what I've been saying about that: I feel like I've already had one of those for many years, and it's the AWS CLI tool.
David: The CLI is great as an MCP server. In fact, the AWS MCP server essentially is just: pass a string that is the AWS CLI command. So yeah, that is a nice interface with documentation in the help, just tailored so it's easier for an LLM to use.

So the key with DevOps agent is that it now adapts, because of LLMs and MCP, to anything. But we've also built it from the beginning to be completely unopinionated about what you're using to do your operations, what infrastructure you're running on, what frameworks you're using, what instrumentation, what observability provider and tooling. It plugs into everything, thanks to LLMs and MCP. We call it AWS DevOps agent because it's built by AWS, but it is unopinionated about whether or not you run on AWS.
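[A minimal sketch of the "pass a string that is the AWS CLI command" idea, assuming a plain Python function standing in for an MCP tool; this is an illustration, not the actual AWS MCP server implementation.]

```python
# Illustrative only: a stand-in for an MCP-style tool that shells out to the AWS CLI.
import shlex
import subprocess

def run_aws_cli(command: str, timeout: int = 60) -> str:
    """Run an AWS CLI command string (e.g. "sts get-caller-identity") and return its output.

    The CLI's built-in --help text doubles as the documentation an LLM needs
    to decide which command to call, which is the "documentation plus API" point.
    """
    args = ["aws"] + shlex.split(command)
    result = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    if result.returncode != 0:
        # Surface the error text so the calling agent can read it and adjust its next call.
        return f"error ({result.returncode}): {result.stderr.strip()}"
    return result.stdout

if __name__ == "__main__":
    print(run_aws_cli("sts get-caller-identity"))
```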
Corey: You could use this hypothetically to troubleshoot things in a completely different environment.
David: That's right. In fact, part of GA support was just built-in, baked-in integration for applications running on Azure.

It works for any cloud you run on, as long as you provide an MCP server that can describe your resources.
Corey: Some people have hobbies, or they grow an herb garden. I do something very similar: I have a test Kubernetes cluster running in the spare room, as one does, and I basically have been letting Claude Code tend the garden for me, for lack of a better term.

I'll let it go ahead and run and make changes to it, and periodically I have to slap a chainsaw out of its hands, because, "oh, it looks like this thing isn't working, I'm gonna blow away the volume and recreate it." Like, there's data on that. Maybe don't do that. It's the guardrail thing, of things that might make sense in some contexts being dangerous in others.

But again, there's nothing critical on this; I'm not that irresponsible. But how do you wind up, I guess, avoiding the temptation to do things that could themselves be destructive? For the LLM, not you personally. I mean, we all wanna burn the office down some days. I get it.
David: With AWS DevOps agent, we're very intentional about what it can do and what data it can look at.

So just like any AWS service, you configure it: you give it permissions to say, you can look at these things. We actually scope that down, so you can give us permissions, and if you try to give us too many permissions, we'll actually just take fewer permissions. So we are very intentional about it. It can do read-only operations, and only certain read-only operations.

Guardrails are basically the main focus of the AWS DevOps agent. When it makes a suggestion, it says, hey, here's a fix, here's how to get yourself out of this operational situation. There are in-the-moment fixes, and we present them the way we do things at AWS: whenever we make any manual change, we write down a very deliberate set of steps in a very deliberate order. Okay, I'm going to run some commands to make sure that the world is what I think it is: these are pre-validation steps. I'm gonna record the current state of the world. I'm gonna make the change, have my rollback steps ready ahead of time, and have post-validation steps to make sure that this had the effect that I want. We call these change management documents.

That's how we present any suggested change: a very deliberate and methodical thing that you can use to execute that change safely. And then when we suggest a coding follow-up change, we give you an agent-ready specification to give the agent all the context about the what and the why, so that it goes for the right approach.
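[A rough sketch of what a change management document like the one David describes might capture; the structure and field names here are illustrative assumptions, not AWS's actual format.]

```python
# Illustrative only: hypothetical fields, not AWS's actual CM document schema.
from dataclasses import dataclass, field

@dataclass
class ChangeManagementDoc:
    summary: str
    pre_validation: list[str] = field(default_factory=list)   # confirm the world is what we think it is
    record_state: list[str] = field(default_factory=list)     # capture current state before touching anything
    change_steps: list[str] = field(default_factory=list)     # the deliberate, ordered change itself
    rollback_steps: list[str] = field(default_factory=list)   # written down *before* the change is executed
    post_validation: list[str] = field(default_factory=list)  # confirm the change had the intended effect

doc = ChangeManagementDoc(
    summary="Raise the connection-pool ceiling on the checkout service",
    pre_validation=["confirm current pool size", "confirm which alarm is firing"],
    record_state=["capture current service configuration"],
    change_steps=["apply new pool size to one host", "watch metrics", "roll out to the rest"],
    rollback_steps=["re-apply the recorded configuration"],
    post_validation=["confirm error rate returns to baseline"],
)
```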
Corey: One thing that I've always been a fan of when it comes to letting these agents run loose, in prescribed ways, let's be clear, has been that they sort of embody that spirit I always look for in SRE hires or DevOps folk, the, uh, never-give-up, never-surrender approach. Whenever they get blocked in a particular troubleshooting direction, they'll come up with some way to solve the problem.

And some of them have been pretty freaking creative: tcpdumps, netcat experiments being fired off, SSHing into the far side, just weird in-depth stuff, stracing to see what the thing's actually doing. It gets really deep, really quickly, and I love that aspect of it. So this almost feels like there's a balancing act of how do you let it continue to drive for those solutions while not also letting it drive your company into the ground?
David: That's definitely why, with DevOps agent, we said, okay, for production operations, let's give it a very specific subset of tools, not the shell, that type of thing. Until we have proof that things like shell commands or writing its own code are safe, we're gonna continue with these very easy-to-understand perimeters and guardrails.
Corey: So you've obviously been running this a fair bit yourself in your environment, which is always dangerous in that "it works super well at companies shaped like Amazon" is not as broadly applicable as one might think. So there's the, okay, now you start testing it on pre-launch customers. I believe it was in public beta for a while after re:Invent.

What have you seen as this has unfolded? What areas does it excel in, and in what areas is it still not doing as good of a job as you might hope?
David: I'd say it's pretty good at sifting through a ton of operational data and figuring out the little needle in the haystack that can explain an alarm or incident that would otherwise have taken a long time to figure out.

There's a company, Western Governors University. They adopted it and reran an incident that had taken them two hours to root-cause manually. This was a while ago, with an earlier version of DevOps agent, an earlier incarnation of it. Where they had spent two hours doing it manually, it figured it out in 28 minutes. I've also found myself making a lot of applications that have nothing to do with AWS, building them not the Amazon way, not the internal Amazon way, for this exact reason. It's actually teaching me a lot about the tools that exist in the real world too. It's a lot of fun. Even when we started, when I first set it up at re:Invent and used it, it root-caused a certain operational issue in about 11 minutes.

I reran that scenario just a week ago, and it took four minutes. So we've been making a lot of improvements. It's very good at that alarm triage.
Corey: Is that consistent? Or if you run it a third time, would it then take 45 minutes? One of the things I've found is that Claude, which is what I use for a lot of this stuff in its Claude Code incarnation, has smart days and dumb days, and you never know what you're gonna get. It's like roll for initiative every time you have it dive into something.
David: It's relatively consistent, I think. Um, a lot of what we're building behind the scenes is around context, around your application, learning about your application.

I guess it's not the same every time, because we're learning from past troubleshooting. That's obviously a tradeoff of making it so that it doesn't have target fixation on a specific past run.
Corey: I know it's DNS; I just have to prove it.
David: Well, that is actually baked in. Unfortunately, it's always DNS.
Corey: There's no way for it not to be. The problem, of course, is when it is DNS, when it can't find the API endpoints, well, suddenly we're not going to space today. But yeah, that's always the challenge. How prescriptive is it as far as its troubleshooting methodology? Is that baked in? Does it follow whatever the computer equivalent of gut instinct is?
Corey: This episode is sponsored by my own company, Duckbill. Having trouble with your AWS bill? Perhaps it's time to renegotiate a contract with them. Maybe you're just wondering how to predict what's going on in the wide world of AWS. Well, that's where Duckbill comes in to help.

Remember, you can't duck the Duckbill Bill, which I am reliably informed by my business partner is absolutely not our motto. To learn more, visit duckbillhq.com.
David: We have baked in kind of how we go about troubleshooting, which, you know, helps for these kinds of specific alarm triage situations.

Because the goal of alarm triage is actually not to understand the root cause. We call it root cause, but that's actually not ultimately the goal. The goal is to figure out what mitigation step would make the problem stop, make the customer impact stop. That's actually the goal. And so we baked that in.

I've talked about this in some past re:Invent talks. It has a somewhat ridiculous name, but we call it the Grand Unified Theory of Incident Management: any production impact can be mitigated through the pursuit of looking to see whether it was a deployment, like a change that broke it; a change in inputs, like a traffic spike, or passing new parameters, or stopping passing parameters; a failed component, like the application crashing on one server or in one availability zone; a dependency; or running out of something. These are kind of all recursive. They point to other parts: if there's a change in traffic, well, let's go investigate the caller to see if they changed, if they did a code deployment. So it's a recursive kind of exploration. We found that to be very useful, and so we've baked this kind of thing into the agent. But we also know that that isn't always everybody's goal.

Their goal isn't always to find mitigation. Sometimes they really just have some ad hoc operational task where they want to figure out what's going on. That's why for GA we've launched on-demand tasks, so you can ask any kind of question.

That mode is a little bit less fixated on finding the root cause of an alarm and more about just helping you in general. It has a little bit less usage so far, but we know that operations is an endless set of very varied things. It's not just responding to alarms, and that's where we're trying to help people.
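[A toy sketch of the recursive exploration behind that "grand unified theory": check for a deployment, a change in inputs, a failed component, a dependency, or resource exhaustion, and recurse into callers and dependencies. The per-service "signals" dictionary is a hypothetical stand-in, not anything the actual agent exposes.]

```python
# Toy illustration only; the signals data model is hypothetical.
def triage(service: str, signals: dict, depth: int = 0, max_depth: int = 3):
    """Walk the usual suspects recursively and return a likely mitigation, if any."""
    if depth > max_depth or service not in signals:
        return None
    s = signals[service]

    # 1. A change (deployment) that broke it: the cheapest mitigation is a rollback.
    if s.get("recent_deployment"):
        return f"roll back {service} deployment {s['recent_deployment']}"

    # 2. A change in inputs: a traffic spike or new parameters usually points at a caller.
    if s.get("traffic_changed"):
        for caller in s.get("callers", []):
            if finding := triage(caller, signals, depth + 1, max_depth):
                return finding
        return f"throttle or shed the new traffic hitting {service}"

    # 3. A failed component: fail away from the bad host or availability zone.
    if s.get("failed_component"):
        return f"take {s['failed_component']} out of service"

    # 4. A dependency: apply the same checklist to it.
    for dep in s.get("dependencies", []):
        if finding := triage(dep, signals, depth + 1, max_depth):
            return finding

    # 5. Running out of something: quotas, connections, disk, threads.
    if s.get("exhausted_resource"):
        return f"raise or rebalance {s['exhausted_resource']} for {service}"

    return None

signals = {
    "checkout": {"traffic_changed": True, "callers": ["mobile-gateway"]},
    "mobile-gateway": {"recent_deployment": "v2025.11.2"},
}
print(triage("checkout", signals))  # -> roll back mobile-gateway deployment v2025.11.2
```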
Corey: Where do you find the differentiation comes in, as opposed to my, you know, crappy backyard experiment of just go ahead and run Claude Code in dangerously-skip-permissions mode, let it tear into the thing and have fun, versus the hyper-lockdown approach that some shops take, where everything it does has to be vetted by a change committee?

One of those leads to faster outcomes but less system stability, and the other feels like it's a lot safer but doesn't get anything done. It feels like there's a continuum there.
David: Yeah. I feel like striking that balance is the important part, and that's where we're going with the, uh, change safety: still letting it have the creativity, if you will, of exploring down any avenue that it might want to chase down.
Corey: You've done a lot of work in your career around the developer experience space, the DevOps world as it is. Uh, where do you think the future of DevOps lies?
David: There have been a lot of flavors of companies organizing around operations, and individuals around how they get things done, how they keep things running.

There are SRE approaches, where you have kind of a frontline team who is also automating things, or more pure operations, where there's an incident response team. And then there's what I think of as DevOps, which of course is a word that has taken on a lot of meanings and changed over the years.

Corey: It encompasses so many things. Like, you can't buy DevOps, but I sure would like to sell it to you. It became a panacea for a while: it just means you're a sysadmin, but if you call yourself DevOps, you'll make 40% more. Great. Good for you. Get the bag. Then that became SRE, then it became platform engineering, and who only knows what it is this year.
David: That's right. I'm obviously biased, because this is how I've been operating, and how AWS generally operates, for the last 20-plus years, but I think it's DevOps's time to shine, because with the advent of agentic AI, coding, everything, people are just getting a lot more done themselves. Like, I love wearing all the hats: talking to customers, thinking about the product, implementing it, running it.

I like wearing the hats, securing it, I like that. And I think now these tools are making that even more vital, to just be able to do more and more yourself, to be a little bit more of a generalist almost. And so I think DevOps is really the future of DevOps. With my definition, of course.
Corey: Oh yeah. Well, my version is always gonna be the future if you let me define the term. I mean, that's just common sense from where I sit. Software's no longer the bottleneck. You can do a whole bunch of things; suddenly I can have it write code to solve my specific problem. I've been doing that and it's great.

Is this robust code? Absolutely not. I mean, as I'm sure you've seen building services and products, everything you thought you knew about a thing gets thrown out the window the first time you put it in a customer's hand and they try to use it like a bad hammer. Other people's requirements, other people's usage patterns, often cause software to no longer work the way that you wanted it to as soon as it violates one of those design constraints.

Now I feel like that's okay; I just send Claude back to the mines to go ahead and build me another version that now does this. But back in the early days of the newsletter, when one day I wanted to start sending out blog posts, it took me three weeks to get that system to support that.

Now it's just yell at the robot and go get a cup of coffee.
David: Yeah, I think the key is that agents are super successful when they can see their own output. That's really the key. Instead of, okay, let me have it build a thing, and it'll give it to me, and then I'll try it and see that it failed, and I'll paste the error back in... you know, that's the tedium. That's not the future. The future is where it can do more and more of that in a loop that it runs until it works. And so that means going further: okay, it'll produce a pull request for you because it's run all the unit tests. Well, what if it could do the integration tests?

What if it could actually... we also have this AWS security agent that can do penetration testing for you. What if you could have that be part of the CI/CD pipeline? What if you could have DevOps agent kind of calling back that, okay, this thing that you did didn't work? Extending that loop out beyond just your local machine, I think, is very important in terms of avoiding a bottleneck where things just pile up in your deployment pipeline.

With so much change, it's about making sure the agent has the agency to see its own result even further, all the way through to production.
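[A toy version of that "runs until it works" loop, where agent_propose_fix is a hypothetical stand-in for the LLM call and the command can be unit tests, integration tests, or any other gate you extend the loop to.]

```python
# Toy illustration: the agent call is a placeholder; the feedback loop is the point.
import subprocess
from typing import Optional

def agent_propose_fix(error_output: Optional[str]) -> None:
    """Placeholder: in a real setup this would ask the model for a patch and apply it."""
    ...

def run_until_green(test_cmd: list[str], max_attempts: int = 5) -> bool:
    error_output = None
    for _ in range(max_attempts):
        agent_propose_fix(error_output)                # agent writes or edits code
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True                                # the gate passed; promote further
        error_output = result.stdout + result.stderr   # feed the failure straight back to the agent
    return False

# e.g. run_until_green(["pytest", "-q"]); swapping in integration tests,
# security scans, or a canary check extends the same loop toward production.
```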
Corey: One of the challenges I've found with folks taking this approach has very often been their organization's own guardrails and boundaries, where even for a human sitting down and solving the problem, they're spending most of their time trying to get access into the right account to look at the logs put out by something, which is sort of a necessary first step unless you wanna black-box troubleshoot, which no one does.

It feels like companies do have to evolve to a point where tools, if not people, can be given access to some of these things.
David: I agree. I think the teams who I've seen be the most productive in their spend make sure to set aside time. They give themselves some breathing room to say, okay, what should we smooth out?

It's kind of like these things that are going to make teams more productive with agents actually would have made the team more productive anyway, but now there's just even higher upside to it, so it's really worth taking a step back. It's a nice excuse, I've always found. Having these nice excuses to take a step back and just breathe and improve developer life, even within a team, has been so great. Like when we realized many years ago that our changes were actually piling up, and the time from check-in to actually reaching production was, I think, weeks on average.

And this was not ideal. We wanted to be able to innovate faster for people. And so we looked at this and said, let's do this continuous deployment thing as an entire company, and that was just a great rallying North Star. We built an internal pipeline system so that everybody could manage their code as it was flowing through and automate as much as possible.

But teams still had to do the work. If teams were hesitant to turn it on kind of full auto, why were they hesitant? Well, we don't trust our tests. Okay, spend the time and write the tests, because it'll pay off. Oh, we don't trust the deploy. Well, okay, have better monitoring, so it'll roll back.

Well, what about this edge case? Well, monitor that. So it's really this nice rallying North Star of how can we actually make this all operate a lot better? And this is a good time for that, with AI.
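[A minimal sketch of the "trust your monitors so it'll roll back" gate described above, with hypothetical deploy, rollback, and alarm stand-ins rather than any real pipeline API.]

```python
# Illustrative only: the deploy/rollback/alarm functions are stand-ins.
import time

def deploy(version: str) -> None: ...
def rollback(previous_version: str) -> None: ...
def alarm_firing() -> bool: return False   # stand-in for querying a real monitor

def deploy_with_auto_rollback(version: str, previous: str, bake_seconds: int = 600) -> bool:
    deploy(version)
    deadline = time.time() + bake_seconds
    while time.time() < deadline:
        if alarm_firing():
            rollback(previous)              # no human in the loop: the monitor is the approval
            return False
        time.sleep(30)
    return True                             # bake period passed cleanly; promote to the next stage
```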
Corey: It seems like there's an opportunity here for a lot of companies to improve their processes with this, just because a powerful LLM assistant like this, smacking into the same procedural guardrails and being forced to wait for someone to escalate, is going to sound a lot more credible than when an engineering team raises it.

"Oh, our engineers are just crappy slash lazy slash maliciously complying with policy." "Okay, now the robot's doing it" lands a bit differently, I suspect. I wonder if that starts to act as internal ammunition to drive cultural change.
David: I think you're right. I agree.
Corey: That's in phase two of the bot: it's gonna be a dashboard explicitly aimed at that, I'm hoping. Now it's, uh, "the problem is your crappy policies." Great. That has its own problems. "Just write code the way that we do." And thus was launched Kubernetes. It's the world in which we live. So what's exciting to you? If we look at the world four years ago versus today, technically speaking, it's hard to recognize just how far we've come in a short period of time. Predicting the future is always dangerous, but what do you think is coming down the road next?
David: You know, I look at the kind of evolution of operations at AWS over the 20 years, and the one thing that's remained constant, of course it's constantly changing, but constant in terms of the priority, is the culture. Operations is one of these things that can be too easy to backseat to a squeakier wheel.

And so that's one thing that everybody, all companies, have to think about: how do we make sure that operations is something that people care about all the time? It's especially important for AWS, because, you know, that is essentially our job. We are doing the ops so you don't have to. That is the whole thing.

But the thing that has changed is kind of the maturity around it. Starting from when we were first building AWS, nobody had built a cloud before, and so teams were figuring out the right ways to do things. Like, how do you measure availability?

So teams would measure it by, like, when I was on DynamoDB, we were measuring latency. Well, latency is, you look at percentiles of how long it takes to respond to certain APIs. Well, when we were building DynamoDB, it was all about predictable performance. That's the main thing.
Corey: Slow is slow, and, uh, consistent is better for a lot of use cases than fast but spiky.

David: Well, that's why with DynamoDB we went for fast and consistent.

Corey: That works too. Exactly. Hey, if I could have all three of the constraints, then terrific.
David: Well, playing with CAP theorem is...

Corey: Why would you ignore CAP theorem? Sure, why not?

David: We actually were able to play some really cool tricks with it.
But anyway, measuring performance even, okay, how do you do that? We found in an early pre-release beta that our latency graphs were flapping. Well, what's going on? And actually it wasn't that the latency was flapping; it was our measurement, because we were clumping together one-kilobyte reads that were buffer pool hits with four-megabyte scans that were going all over the disk, and that was all in the same measurement. So, oh, let's break that out.

Let's have three different latency buckets: a bucket for the super small reads, another bucket for the medium-sized reads, and a bucket for the large reads. These kinds of operational practices, everybody was learning. That was kind of the first phase. Then we started writing it down and sharing it more, turning things into checklists.

It's kind of like Well-Architected, where you would say, okay, we know that it's important to do these things: organize a pipeline this way, scale this way, have resiliency set up in multiple availability zones. You started having checklists. Then we started automating the evaluation of the checklists, to just let teams know if they have something that isn't following what we think is probably a better way to do it.

Then we started fixing things automatically, saying, okay, here's a pull request that upgrades Java for you, and all your dependencies, something that would take you a long time to do. So if I follow that trend line of automating operations, I think the future is going even further, to just getting it done.

The operational backlog of improving any application is endless. There's an unlimited number of things you can do to improve your operational posture in a system, and everybody wants to reach further into that backlog. So I think the future is just getting it done for you. But that means agents need to be able to fully load test, to fully do everything to validate those changes. Like pushing a change to upgrade your application to a newer Java runtime: before, writing that change wasn't the easy part, but now that's actually kind of the easy part. It's how you make sure that deploying it is actually going to work well, without any hiccups along the way.

That's the hard part.
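[To make the earlier latency-bucketing point concrete: a small sketch that records read latency under a size-bucket dimension. boto3's CloudWatch put_metric_data call is real; the namespace, metric name, and bucket thresholds are illustrative assumptions.]

```python
# Illustrative metric emission: bucket latency by request size so cache hits
# and large scans stop being averaged into one flapping graph.
import boto3

cloudwatch = boto3.client("cloudwatch")

def size_bucket(bytes_read: int) -> str:
    if bytes_read <= 4 * 1024:
        return "Small"      # e.g. 1 KB buffer-pool hits
    if bytes_read <= 256 * 1024:
        return "Medium"
    return "Large"          # e.g. multi-megabyte scans that go all over the disk

def record_read_latency(latency_ms: float, bytes_read: int) -> None:
    cloudwatch.put_metric_data(
        Namespace="MyService",  # illustrative namespace
        MetricData=[{
            "MetricName": "ReadLatency",
            "Dimensions": [{"Name": "SizeBucket", "Value": size_bucket(bytes_read)}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )
```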
Corey: Is there a future where the entire infrastructure and DevOps side of the world today, which is an entire career field, just becomes something the platform does under the hood, where people don't think about it? Here's the application, make sure it runs, do what you've gotta do, and it just comes down to application development in a lot of shops?
David: Well, I think there's a difference between it happening automatically and people not thinking about it. I think people always have to obsess over their customer experience, and ops is a lens: understanding your operational characteristics is a lens through which to look at and understand the customer.

So that's not gonna go away. But if I look at what coding agents are doing, and the amount of complexity that they're encapsulating in a spinning wheel... there's a spinning wheel: oh yeah, working on it, working on implementing your entire application, and then it's done. That's all a spinning wheel with an enormous amount of work abstracted into it.

I think DevOps is the next one of those: okay, now let's improve your resiliency, let's optimize your application, let's add monitoring. These are things that become accessible to anyone; it'll just happen for you.
Corey: These are exciting times. These are fun products that are really making a lot of the historical toil not nearly as toilsome. Even as a guided assistant, it's: okay, I'm out of ideas to troubleshoot this, robot, do you have one? It starts to almost become a collaboration aid.
David: Yeah, it's kinda like rubber duck debugging, except it actually will give you some ideas instead of just being a reflection of your own.
Corey: The rubber duck has learned to talk. We've wound up in a very strange place suddenly, and it happened very quickly.
David: Yeah. I have this rubber duck right here, actually. I just picked this up. It's a Captain Janeway one. That's my rubber duck of choice.
Corey: Yeah, I've started sneaking them into the house wherever my wife least expects.

I call it redecorating, and I'm going to have to deal with the consequences of that someday, but not today. David, thank you so much for taking the time to speak with me. If people want to learn more, where should they go? What's the best place to find you?
David: LinkedIn, X, Bluesky, the fragmentation of every social media thing. I'm there. Happy to chat.
Corey: Fantastic. And we'll of course put links to that in the show notes. Thank you so much for taking the time to speak with me. I appreciate it.
David: Thanks for having me. A lot of fun.
Corey: David Yek, Senior Principal Engineer at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.

If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that you hopefully were able to get the AWS DevOps agent to write for you.