Jeremy chats with Chase Douglas about how serverless applications change our development workflows, what a local development process looks like, and some tools we can use to help make our lives easier.
About Chase Douglas
- Twitter: @txase
- Email: firstname.lastname@example.org
- Stackery: stackery.io
- Stackery Blog: stackery.io/blog
- Stackery Changelog: https://docs.stackery.io/en/changelog/
- Live Stream Series: https://app.livestorm.co/stackery/
- Portland Serverless Meetup: https://www.meetup.com/Portland-Serverless-Architecture-Meetup/
Chase: Hey, glad to be here.
Jeremy: So you are the CTO at Stackery, which is in Portland. Why don't you tell the listeners a little about yourself and what Stackery does?
Chase: Yeah. So I'm the CTO and co-founder of Stackery, and I've spent my career figuring out how to manage complex systems. As serverless started to take off as an architectural pattern, I was really interested in figuring out how we help people adopt that pattern more easily. Stackery is a product, a tool set, that makes it easier for anyone, from individual developers on up to teams and organizations, especially at larger sizes, to design, manage environments, deploy, and on the other side, monitor their serverless applications.
Jeremy: I wanted to have you on the podcast today because I want to talk about serverless development workflows. I think when people start moving into the serverless paradigm, we have to change the way we think about developing applications: everything from whether you're developing locally or remotely, to whether you're trying to do something like offline emulation or remote testing. What are some of your thoughts on this idea of serverless development workflows? How does it change things?
Chase: Yeah, serverless itself is a different way of building, developing, and testing applications. One of the things we have to step back and recognize is that at the end of the day, we're still developing software, we're still testing software, but we need to find the right ways to be efficient at how we do those things. It's slightly different in a serverless world, but once we find the right patterns, and once we start to use those as an individual or on a team, things actually speed up once again. So there is an interesting play here, but it's all about finding the right mix and match of how to do the things we're familiar with when it comes to development and testing.
Jeremy: Yes, that makes a ton of sense. So what I'd like to do is dive down into a number of these different topics and break it down a little bit, because, again, you're an expert, and Stackery obviously is all about building out these workflows, or helping developers build these workflows. So I want to get into the details here, and let's start with the 30,000-foot view. How has the cloud changed the way that we develop software?
Chase: Yeah. So the way that we've always developed software, up until very recently, was that it would, in the end, be running on servers, whether in a data center or in the cloud. But these servers were monolithic compute resources. That meant a typical architecture might be a LAMP-style stack: you've got a Linux server, and you've got a MySQL database off to the side somewhere, maybe on the same machine, maybe on a different machine. But mostly, as a developer, you're focused on that one server, and that means you can run that same application on your laptop. So we became very comfortable with, and built up tooling around, the idea of being able to run an entire application on our laptop or desktop that faithfully replicated what happens when it gets shipped into production in a data center or in the cloud. With serverless, everything works a little differently. You don't have a monolithic architecture with a single server somewhere, or a cluster of servers all running the same application code. You start to break everything down into architectural components. So you have an API proxy layer. You have a compute layer that oftentimes is made up of Lambda, though it can include other things like AWS Fargate, which is a Docker-based approach that is serverless in the sense that you don't manage the underlying servers. So you've got some compute resource, and if you need to do queuing, instead of spinning up your own cluster of Kafka machines, you might take something off the shelf, whether it's SQS from AWS, or their managed Kafka service, or Kinesis streams. There's a whole host of services available to be used off the shelf. And so your style of building applications is around how to piece those together, rather than figuring out how to merge them all into a single monolithic application.
Jeremy: So how, then, do developers need to think differently? I mean, again, I'm super familiar with the LAMP stack. That was probably where I started. Well, I started with Perl and static text files, but we won't talk about that. But as we got a little bit more advanced and started using things like the LAMP stack, it was very easy for us to just test it locally, or even build in the cloud, or, not the cloud, on our hosting provider, where we could just upload a new file and things would magically work for us. But as you mentioned, things get more distributed, right? Once we go into this cloud environment, you've got multiple services working together. You don't necessarily own those services, and if something breaks with them, you kind of have to deal with that. So what are some of the limitations a developer might have to deal with when they start to move their production workloads to the cloud?
Chase: Yeah, for all the benefits you get from serverless, with its auto scaling and its ability to scale down to zero, which reduces cost, you do have some things to manage that are a little different than before. One of the key things is that if I've got a compute resource like a Lambda function in the cloud, it has a set of permissions that it's granted, and it has some mechanism for locating the external services it uses, like an SQS queue, or an SNS topic, or an S3 bucket. So it has these two things that it needs to be able to function: permissions and locations. The challenge that people often hit very early on in serverless development is: if I'm writing software on my laptop and I want to test it without having to go through a full deployment cycle, which may take a few minutes to deploy the latest code change to the cloud service provider, even if it's a one-character change, how can I actually test with proper permissions and proper service discovery mechanisms from my laptop? What mechanisms are there to do that? That's something that is always evolving, but especially here at Stackery, we have some interesting ideas about how to make that easier.
Jeremy: Well, what about some of the ways that we try to replicate the cloud environment locally? We can run Docker containers that maybe simulate this. We also have mocking, we have stubbing, and some of these other things. Why are those a good idea or not a good idea?
Chase: Yeah, there are two different approaches. There are fakes, where you run a fake version of DynamoDB, a fake version of S3, and you run those, oftentimes, by running Docker containers on your own laptop, and you facilitate how your compute resource, your function, can locate them. The challenge with running fakes is that Docker is a little bit of a beast to run, and not everyone has 16 gigabytes of memory on their laptop. While you can spin up a fake S3 service and you can spin up a fake DynamoDB table, it can be quite challenging to spin up fakes for all the different services that a modern application consumes. Many applications that we see are comprised of a dozen different services, so faking all of those is challenging. On the mock side, instead of running a full fake service for testing purposes, you've got the ability to say: pretend that there's an S3 API that responds with this data when certain requests are made to it. The challenge there is that you still have a fidelity problem: what happens when the service is updated and has new features that you need, or slightly changes, or the mechanics are slightly different at different scales? These are things that can only reliably be tested in the cloud, and not on a laptop. You get kind of a partial fidelity out of it.
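The mock approach Chase describes can be sketched with Python's standard-library `unittest.mock`; the handler, bucket, and key names below are hypothetical, and the injected client merely stands in for a real S3 client:

```python
import io
from unittest.mock import MagicMock

# Hypothetical Lambda handler that reads an object through an injected
# S3-style client (names here are illustrative, not from the episode).
def handler(event, s3_client):
    response = s3_client.get_object(Bucket="uploads", Key=event["key"])
    return response["Body"].read().decode("utf-8").upper()

# Stand in a mock for the real client and pin its canned response.
mock_s3 = MagicMock()
mock_s3.get_object.return_value = {"Body": io.BytesIO(b"hello")}

result = handler({"key": "greeting.txt"}, mock_s3)
mock_s3.get_object.assert_called_once_with(Bucket="uploads", Key="greeting.txt")
```

The fidelity gap Chase points out is visible here: the mock only ever returns what the test pinned, so any change in the real service's behavior goes unnoticed until the code runs in the cloud.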
Jeremy: Right. And the other thing you have, too, is when you start to test some of these complex workflows. If you're doing SNS to SQS and then consuming that with a Lambda, or you're implementing something like the Event Fork Pipelines that AWS recently released, or trying to do some sort of fan-out process, things like that, you're not going to see how those actually work until you deploy them to the cloud.
Chase: Exactly. Yeah, again, it goes back to that idea that your applications now are a graph of connected resources; it's not a monolithic service. So how do you connect all these things together? If you have a function that writes to a DynamoDB table, but then there's a stream coming off that DynamoDB table of all the events that have occurred, how do you model that with mocks and fakes? I'm sure it's possible, but it is not very straightforward. And once you do it for one service, DynamoDB over here, now you have to do it for all the other services, and they all have their own different ways of spawning events and interacting with other resources themselves.
Jeremy: And I can imagine that could get pretty tough when you start working with teams, right? Maybe one thing is mocked, and maybe another thing, like you said, is using a fake to process the data, and the testing there gets pretty difficult.
Chase: Exactly. A lot of times when we're talking about fakes and mocks anyway, it's to fake or mock away one part of the system as you develop and test a different part. But what happens when you're done developing and testing that one part and you need to focus on a third part? You may need to tear down those mocks or those fakes. You may need to build up completely different mocking and faking. And so you sometimes see an explosive growth of test framework mechanisms to be able to test everything, in lieu of actually testing things in a cloud-like environment.
Jeremy: So why don't we talk a little bit about the development tools that are available to serverless developers. You have a lot of experience here, because in working with customers at Stackery, you get to go into these companies and see how they're doing it now, which gives you some really interesting insights into what other companies are doing. So could you give us a typical development workflow, or what you see as the typical development workflow when you come into a company?
Chase: Yeah, that's a great question. There are a lot of people out there who are fairly new to the whole serverless ecosystem, and we have learned, as we've engaged with many of them at Stackery, what that looks like and what they feel. A lot of people understand, based on what they find online, that serverless has a lot of benefits, and they want to achieve those benefits, whether it's scalability, managing costs, or making costs more predictable, whatever it is that drew them to the possibilities of serverless. Then they need to take the next step of "how do I realize that? What tooling is out there to help me build my application?" People oftentimes do a Google search and come across the Serverless Framework, which is a great entry point into the serverless ecosystem. But one of the places where that starts to stretch too far is as soon as people get beyond building a single function, or a function that responds to one set of event sources, and they need to expand into a graph of sources, where you might have an API that has a route backed by a function, which needs to access a DynamoDB table, which has a stream that's going to trigger another function. That's all technically possible to set up inside of the Serverless Framework, but it starts to become very challenging to do without diving into the raw CloudFormation syntax, which is the infrastructure as code that the Serverless Framework compiles down to. So people start to look around, and many of them come to us, asking questions like: how does this grow beyond a view of just functions that respond to events? Beyond that, how do I manage environments so that I can deploy my own version of our application, and each of my team members can deploy their own versions into different environments, and into production and staging and test environments? And how do I manage credentials?
The really interesting and fun thing is that there are great solutions out there for all of this. There's AWS Systems Manager Parameter Store for parameters. There's AWS Secrets Manager for managing credentials. There's CloudFormation and SAM, the Serverless Application Model, which is kind of an extension on top of CloudFormation that AWS publishes. These all provide great mechanisms, and people just need ways of piecing them together, in the same way that they're piecing together the underlying serverless Lego blocks to build their applications.
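As a minimal sketch of how these pieces fit together, a SAM template can pull a value out of Systems Manager Parameter Store and hand it to a function as an environment variable; the parameter path, runtime, and resource names here are hypothetical:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  TableName:
    # CloudFormation resolves this from Parameter Store at deploy time
    Type: AWS::SSM::Parameter::Value<String>
    Default: /myapp/dev/table-name   # hypothetical parameter path
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      Environment:
        Variables:
          TABLE_NAME: !Ref TableName   # service discovery via env var
```

Each environment (dev, staging, production) can then store its own value under its own parameter path, so the same template deploys cleanly into each.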
Jeremy: Yeah, and the other thing, too, is the CI/CD pipelines that you see a lot of companies deploying. There are always articles out there: here's how you do it with serverless, here's how you do safe deployments, things like that. Is that a challenge? It seems pretty straightforward; I've set a few of these up myself. Is it relatively straightforward, or is this something you see a lot of customers having trouble with as well?
Chase: Well, lots of people want to know how to set up a CI/CD pipeline for serverless, and in fact that's one of the top questions we get. But it actually isn't just "how do I set up a CI/CD pipeline?" Certainly, that's part of it; that's part of any proper application environment. But "pipeline" is really the key word here. They're asking not only, once I've got code written, how do I integrate and deploy that effectively? They tend to be asking: what is the entire workflow, from how I set up a project, to how I manage it inside of a git repository, how I manage parameterization and credentials, and the processes and workflows that individual developers go through to build out that application. So "how do I build a CI/CD pipeline?" is a stand-in, because they don't have a term for everything before that. It's really about the full development workflow and life cycle.
Jeremy: I think you bring up a really good point, because that is one of those things where I feel like the entire workflow changes, right? It used to be, like we talked about at the beginning, you'd make a change to the code and upload it to your hosting provider. Or, even if you did have a CI/CD process, you'd check that in, go through the git approval workflow, and then automatically kick off builds. I think a lot of that is the same, but certainly when it comes to some of the testing, I don't know if it's as easy anymore as just saying, "Oh, we're going to create a test environment, a dev environment, a staging environment, and then a production environment." You have a lot of individual developers who want to work with their own environment, maybe have their own test environment, and you run into the problem there, or maybe not so much a problem as something you have to understand: how do you create all these separate test environments for your individual developers? And maybe even beyond that, what if you are using external services? Let's say you're using Aurora Serverless, for example. Do you create a separate instance for every single developer? Do you have a set of shared resources that multiple developers can use? What's the best practice for that?
Chase: Yeah, that again is one of the leading questions that we hear from people. The challenge there is that AWS has built up all of these services under the idea that you use them within a single account, and over the past few years they've slowly realized that there needs to be some mechanism for creating something like an environment. There are ways we help our customers manage environments if they need to deploy into the same AWS account, through proper namespacing of resources, but one of the most effective mechanisms for achieving environment isolation is actually setting up separate AWS accounts. So what we suggest to our customers is that they create separate AWS accounts for each of their environments, production, staging, testing, but also for each of their developers: Bob, Sue, Joe, and everyone on the team. It's actually lightweight enough, and it doesn't cost anything, to set up AWS accounts underneath what AWS calls an organization umbrella, and then within tools like Stackery, you've got the ability to tie each of those accounts to different environments so that each developer can have their own environment. Now, the reason this is something new in cloud services is oftentimes glossed over. It was certainly never really done in data centers; you didn't have a separate data center environment for each of your developers. The difference is that you can now actually spin up individual serverless environments and applications at full scale for each of your developers, and it costs nothing, or very little, to run at a steady state. That's one of the key benefits of a serverless architecture.
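The two isolation mechanisms Chase mentions can be sketched in a few lines; all names and account IDs below are made up for illustration:

```python
# Shared-account isolation: prefix every resource name with its
# environment so each developer's stack stays separate.
def resource_name(environment: str, base_name: str) -> str:
    """Environment-scoped name for a shared-account deployment."""
    return f"{environment}-{base_name}"

# Account-per-environment isolation: a hypothetical mapping from
# environments to AWS account IDs under one organization, with one
# account per stage and per developer.
ENVIRONMENT_ACCOUNTS = {
    "production": "111111111111",
    "staging":    "222222222222",
    "dev-bob":    "333333333333",
    "dev-sue":    "444444444444",
}

def account_for(environment: str) -> str:
    """Pick the AWS account a deployment should target."""
    return ENVIRONMENT_ACCOUNTS[environment]
```

With the second approach, the deployment tooling just needs to assume credentials for the right account; the templates themselves stay identical across environments.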
Jeremy: Yes, I love that idea of isolating individual developers, especially when you start developing against DynamoDB tables, or even if you're using an RDBMS and need something like Aurora Serverless, which can scale down to zero and shut itself off if you need it to. That's definitely great advice, and it's certainly something that I recommend as well. So before we move on to the next subject, let's just go over quickly some of the IDEs that are available to serverless developers. Obviously, Stackery has a web interface right now, but what are some of the other ones? VS Code is a good example. What else could developers use if they were looking for an integrated development environment?
Chase: Yeah, this gets at the heart of one of the key loops that we talk about within Stackery, about how developers build applications. There's an outer loop that is around infrastructure maintenance, management, and building. As you need to create new functions, create new resources like DynamoDB tables, or change the routes of an API, that's the outer loop; there's a process in which you have to build and define that. And as you mentioned, one of the ways we help with that at Stackery is a visual editor that slurps in your AWS CloudFormation, or SAM, or even Serverless Framework projects, and lays them out visually, allowing you to add new resources by dragging and dropping and wiring them up, but then compiling all that back down into the native raw CloudFormation, Serverless Framework, SAM, what have you. That workflow helps with the outer-loop challenge of managing the infrastructure. Then there's the inner-loop challenge of how I edit the code in my compute resources and quickly iterate through testing and re-editing that code. Of the IDEs that people oftentimes use, VS Code is extremely prevalent, as it's got great support for Node.js and Python, which are the two highest-usage languages in serverless. So Visual Studio Code's a great one. JetBrains and their platform is a great one, PyCharm for Python as well. In fact, this has led us to ask ourselves: what could we do to make both of these loops, the outer loop where you're modifying infrastructure, and the inner loop, faster? And so, just a couple of weeks ago, we launched an integration through a Visual Studio Code plugin that works side by side: you're able to have the template for your infrastructure open, and you're able to visually model all the interactions between resources, where you can drag and drop. As soon as you drag in a resource, you see it update in the template, and vice versa.
That helps with the outer loop. And then on the inner loop, we also launched a local invocation mechanism. We were talking earlier about how challenging it is to run your compute code on your own laptop and still have the fidelity of interacting with cloud resources without doing mocks and fakes.
Jeremy: That's the last thing that I want to talk about. I want to get into this idea of: how do we develop locally but use these cloud resources? You've used the terms, I've heard them before, "cloud-side" or "cloud local": the idea of bringing the cloud down to your laptop, as you've said in the past. So let's talk about that. Before we get into what Stackery does, what are the challenges right now for somebody who's trying to access remote resources when they're developing, maybe, a Lambda function locally?
Chase: Yeah, you start with some code, and that code for a Lambda function has this handler that gets invoked. One of the things that I did early on, when I was starting to play with this to try to speed up the iteration workflow, was to say, "Well, I could write a little wrapper script that invokes that handler function with some test data, just to get it running locally without having to deploy it all out." And there came along some tools that helped facilitate this mechanism. AWS SAM's tooling has `sam local invoke`, which will take your function code and actually spin it up in a Docker container and run it as though it's in a proper Lambda environment. The Serverless Framework has a similar thing. But even there, you have a challenge: the permissions that your function has are based on the permissions that you have locally on your laptop. Now, a lot of developers have what amount to administrator permissions on their laptop; they can, if they want, interact with any resources inside of their AWS account. Whereas the function that you're building is tied to a very specific set of permissions; you don't normally give it full administrator access. So a lot of times you get your code working, and then, as a second step, you have to figure out whether the code still works when you deploy it to the cloud with the permissions set the right way. And then, lastly, you've got the challenge of the service discovery piece: if I'm running on my laptop, how does my function know which DynamoDB table it should be interacting with, or which SQS queue it should be sending messages to? You've got to solve these problems through some mechanism, and a lot of people come up with their own little test scripts on the side that help here and there.
But there's a real challenge around having a workflow that a whole team within an organization can uniformly use, one that provides that sense of bringing the cloud to their laptop locally.
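The "little wrapper script" Chase describes at the start of his answer can be sketched in a few lines of Python; the handler and test event are hypothetical:

```python
import json

# Hypothetical Lambda handler under development (illustrative only).
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

# The wrapper: invoke the handler directly with canned test data instead
# of deploying. Note the limits discussed above: this runs with your
# laptop's (often administrator) permissions, and it knows nothing about
# the deployed function's environment variables or IAM role.
test_event = {"name": "serverless"}
response = handler(test_event, context=None)
print(response["statusCode"], response["body"])
```

Tools like `sam local invoke` go one step further than this by running the handler inside a Lambda-like container image, but the permission and service-discovery gaps remain.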
Jeremy: Right, and it gets even more complex when you start thinking about different stages. You're always publishing to dev, or to test, or to prod, or whatever you've named them. So if you are trying to access your dev tables, or your SNS topic for dev, and so forth, you also have the problem where, even if I do publish to the correct SNS topic in the dev environment, if I've got other Lambda functions or SQS queues subscribed to it, they're going to get those requests; those requests are going to go through even if I'm testing locally. So there's a whole bunch of things where there's probably not a perfect solution. And, you and I have talked about this, I wrote a little plugin for the Serverless Framework; all it does is resolve the references in there. But you've gone a lot further with that by actually taking care of at least the permissions side of it and that service discovery, as you said, right?
Chase: Yeah. So what we did was piggyback on top of AWS SAM local and how it invokes things inside of that neat little Docker environment. And we went beyond that to say: what if we know where you've deployed an application into an AWS account? For this function, deployed under this name, why don't we go grab all of the environment variables, which are often used as the service discovery mechanism. If your function needs to talk to a DynamoDB table, you'll put the name of that table in an environment variable for the function, using some magic that CloudFormation has. So we go out and grab the environment variables for that Lambda function, which helps with the service discovery. We also do something called assuming the AWS IAM role, which is what holds the permissions that the function is granted. We assume it, which means that we get credentials that give us the same permissions, and then we feed that into the local invocation as well. So now you are running your code inside a proper Lambda context in terms of the operating system image, you have the same environment variable values that your function has as it's registered inside of your AWS account, and you have the exact same permissions. And your iteration loop here: if you make a code change, even if it's a one-line change, or you're adding whole files, whatever it might be, we watch for changes on the file system and we rerun that function. So we're talking about on the order of five seconds to fully retest a function that you're developing, all in that cloud local environment.
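The recipe Chase outlines, grab the deployed function's environment variables, assume its IAM role, and feed both into a local run, can be sketched as pure data-shaping. The AWS API calls themselves are omitted here; the sample dictionaries only mirror the shapes of `lambda:GetFunctionConfiguration` and `sts:AssumeRole` responses, and all names and values are made up:

```python
# Sample response shaped like lambda:GetFunctionConfiguration.
sample_function_config = {
    "Environment": {"Variables": {"TABLE_NAME": "dev-orders-table"}},
    "Role": "arn:aws:iam::123456789012:role/api-function-role",
}

# Sample response shaped like sts:AssumeRole for that function's role.
sample_assumed_role = {
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": "example-token",
    }
}

def build_local_env(function_config: dict, assumed_role: dict) -> dict:
    """Merge service-discovery variables with the function's own credentials."""
    env = dict(function_config["Environment"]["Variables"])
    creds = assumed_role["Credentials"]
    env["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
    env["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
    env["AWS_SESSION_TOKEN"] = creds["SessionToken"]
    return env

local_env = build_local_env(sample_function_config, sample_assumed_role)
```

An environment dictionary like this could then be handed to a local runner, for example via `sam local invoke`'s `--env-vars` file, so the code sees the same locations and permissions it would have in the cloud.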
Jeremy: Yeah, and the other thing you had mentioned is the plugin that you developed for VS Code. If anyone has developed even the most basic of serverless apps, once you start adding resources, it's always funny to me, especially if you're doing SNS topics with subscriptions, you have to create the permissions for the subscriptions; it's like seven different things you have to do for every SNS topic with subscriptions that you want, and you have to have all of that in your resources. So this is going to take the experience that I get on Stackery now and bring it right onto my local laptop. I don't have to communicate with the web or do any of that stuff; it's just going to run locally.
Chase: Yeah. As we built Stackery, with this neat visual editing that really unlocks the power of CloudFormation and all of the services inside of AWS for people, it actually became almost like a learning platform for them. As they become more comfortable, they understand: oh, if I drag this thing onto the canvas, all I see it doing is adding these few resources to the CloudFormation template. They start to become more comfortable with it, they start to learn, and they start to be able to teach that to the rest of the team. It's been interesting seeing it become a kind of learning platform. But leaving that aside, one of the challenges that some people had with Stackery's visual editing was the context switch: if I have to create a function and then modify code, I would log in to Stackery in a web browser, add a function, drag it onto the canvas, and wire it up, and that's great. I would commit that to my git provider, but then I'd have to go to my IDE and pull down the changes. Now I'm ready to write code, and when I'm ready to test, I have to push that back up to git, and then I have to go back to Stackery and tell it to deploy. So there's a lot of context switching involved. One of the really exciting things about what we released a couple of weeks ago with our Visual Studio Code integration is the fact that we're bringing all of that into the IDE. You don't have to leave the IDE when you need to modify your infrastructure. You don't need to leave the IDE when you need to deploy out to your AWS account. You don't need to leave the IDE when you're iterating on your code. This is really that next transformative step in helping people build serverless applications effectively.
Jeremy: Yeah, for me, just the amount of documentation surrounding CloudFormation is overwhelming, and as good as the documentation is, and it has what it needs, sometimes the examples aren't always super clear. You always have this issue with nested values, where it will say, "Oh, this requires this SNS topic value," whatever it is, and you have to click into that and see what it contains, and that may have nested things of its own. So I think you really hit the nail on the head when you said that you can use it as a learning experience or a learning tool, because it gives you all of the possible options; you don't have to go and look at all this documentation, it kind of does it for you. And of course, once you do these things a few times, certain things become second nature to you. But definitely a really, really great feature. I love what you guys are doing over at Stackery; you have a great team over there. I think people can really benefit from using this, even if it's just as a learning platform, but obviously, to become customers of yours, I think, would be great as well. Well, I think that probably wraps us up. Thank you so much for joining me, Chase, and for continuing to share all of your knowledge with the serverless community through what you guys are doing at Stackery. So how can people find out more about you and what Stackery is up to?
Chase: Yeah. So our website is stackery.io and we push things out on our blog very frequently, at least once a week, if not much more.
Jeremy: I like the name, by the way. Stacks on stacks.
Chase: Yeah. Thanks. Thanks.
Jeremy: A good name.
Chase: Yeah. One of the most interesting things is that we've got a changelog, and we have some people on our team who are exceedingly witty. I don't know if a changelog is the most exciting thing to most people, but in the past month it's covered everything from adding support for visually editing WebSocket APIs, to the cloud local invocation mechanisms, to the IDE integration. All these exciting things, the changelog has first. You can always reach us on Twitter as well; personally, my handle is @txase, and I would be happy to answer any questions people might have. Lastly, I would say, inside of Stackery itself: say you're thinking of doing serverless, or maybe you've tried it but have had a hard time figuring it out, and you just don't have the confidence to build a successful serverless application. If you sign up and start using Stackery, one of the most prominent features of our application is a little chat box down in the corner. We staff that all the time, and we get back to people, not just about, "oh, I can't figure out how to do this in Stackery," but also, "oh, I need to add a CloudWatch alarm on a random resource metric, and I don't have any idea how to do that with CloudFormation." We help people get off the ground. Oftentimes it just takes a couple of interactions to understand how AWS accounts work and how CloudFormation works. We'll be there to help you figure that out and get you off to the races.
Jeremy: And you guys have a, speaking of teaching, you have a live stream every Wednesday, right?
Chase: Every Wednesday. Yep. We had you on a couple of weeks ago. It was great; it was one of the best ones, I think, when you talked about architectural patterns.
Jeremy: Well I appreciate that. And then, last thing again, I know you guys do a lot for the community. You host the Portland Serverless Meetups, right?
Chase: Yes. Every month we've got our serverless meetup, bringing in the best serverless practitioners from around the area.
Jeremy: Awesome. All right, well, I will make sure we get all of that stuff in the show notes. Thanks again, Chase.
Chase: Thank you. It was a pleasure.
What is Serverless Chats?
Serverless Chats is a podcast that geeks out on everything serverless. Each week, Jeremy Daly chats with another serverless champion to explore and do a deep-dive into specific topics in the serverless space.