In this episode, Jeremy chats with Gareth McCumskey about a number of production-ready serverless use cases including RESTful APIs, GraphQL, WebSockets, and capturing clickstream data in PART 1 of this two-part conversation.
Gareth McCumskey is a web developer with over 15 years of experience working in different environments and with many different technologies, including internal tools development, consumer-focused web applications, high-volume RESTful APIs, and integration platforms that communicate with tens of differing APIs, from SOAP web services to email-as-an-API pseudo-web services. Gareth is currently a Solutions Architect at Serverless Inc, where he helps serverless customers plan and build solutions using the Serverless Framework, and does developer advocacy for new developers discovering serverless application development.
- Twitter: @garethmcc
- LinkedIn: linkedin.com/in/garethmcc
- Portfolio: gareth.mccumskey.com
- Blog Posts: serverless.com/author/garethmccumskey/
Watch this episode on YouTube: https://youtu.be/Q3tbdlHH0Mg
Jeremy: Hi everyone. I'm Jeremy Daly and you're listening to Serverless Chats. This week I'm chatting with Gareth McCumskey. Hey Gareth, thanks for joining me.
Gareth: Thanks so much for having me Jeremy.
Jeremy: So you are a solutions architect at Serverless Inc. So why don't you tell the listeners a little bit about your background and what you do as a Solutions Architect?
Gareth: Sure. So, going back a bit, I mean I've been a web developer for a few years now, coming up to 15 years. It doesn't feel quite as long as that, and I actually started back in the days of building with PHP web frameworks and so on. And my first start with serverless was back in 2016, when I was taking over the lead of a team at the time.
And part of my job there was to try and help modernize this aging WordPress monolith that had been the company's entire online presence at the time. And the company, they sold tours online. And online was the only way that they sold their product. So it was quite important to have this product working well. And then I was going through the usual steps, just taking a look at how we could potentially modernize things, looking at the Laravels and Symfonys of the time.
And I was chatting to one of the guys at Parallax Consulting who had helped this company set everything up on AWS, get all the VMs up and running, the load balancers and so on. And one of them suggested that I take a look at this serverless thing that one of their team had spotted. So I thought, well, let me give it a try and see what this thing is.
And that really ended up being my road down into serverless, because the moment I picked serverless up and started looking at potentially building a RESTful API out of serverless to help modernize the architecture for the company, that was me. I was down the road and started building a POC. And the POC we had was just to take one small portion of the existing stack and replace it with something completely based off of serverless.
Something that received reasonably high traffic that wasn't super critical for the running of the organization. So if it failed, it wasn't a train smash. But if it succeeded, it would give us a great indicator that this was something we could definitely move forward with in the future. And ultimately the POC was a raging success.
Everybody in the organization was incredibly impressed with how well this serverless tech that we built performed. And to be perfectly honest, it wasn't even the best-architected serverless tech in the world, but it still performed incredibly well, which was quite impressive at the time. So yeah, we were really happy with that. That essentially solidified serverless for me as the way forward in the future.
Jeremy: Awesome. And so then you started working at Serverless Inc as a solutions architect. So what are you doing there now?
Gareth: So now I'm involved with the growth team, and being at a startup the roles are quite mixed, so I'm called a solutions architect, but I end up doing a lot of different things. One of my main roles is support for our paid product, the Serverless Framework Pro dashboard: I help users who are using our product deploy it and set things up. We have a number of users who need support and assistance in setting up their serverless architectures and designing those architectures around their use cases.
That's a really interesting job where you get to see quite a variety of ways that organizations are using the Serverless Framework. And it also means I'm working on content all the time, so I'm writing blog posts, producing videos, talking to the community, doing talks, all the usual sort of developer relations side of things as well. Keeps me quite busy.
Jeremy: Awesome. All right. Well, so since you are so deep into this stuff now, and again, you're working with a bunch of different clients with Serverless Inc, you're writing these blog posts, you've been doing this for quite some time now. I mean, I think you started what? Around 2016 or so working on serverless. Is that about right?
Gareth: Yeah, I started in 2016 building serverless applications for the first time, and last year I joined Serverless Inc themselves. Yeah.
Jeremy: Right. So in terms of experience with serverless, you probably have the most amount of experience you can possibly get, right? Because this is such a new thing. So you've been doing it for a while, you've been seeing all these different things. And one of the things I think is really interesting for people to be able to see and especially people who are new to serverless is this idea of what are the use cases that you can solve with it. Right?
And it's funny, if you're familiar with James Beswick, he has this sort of joke that he used to do it and in one of his presentations where he thought serverless was just for converting images to thumbnails. Like that was sort of like a very popular use case way back, when this first started to become a thing. And obviously you see a lot of things like web APIs and some of this other stuff.
But I'd love to talk to you about that today. Because I think there are a broad range of serverless use cases. And I'm probably in the camp of you can basically do anything you want with serverless. There may be a few exceptions here and there. But maybe you can just give us a... What do you see as like the most popular serverless use case?
Gareth: All right. Now by far the most popular use case is using serverless to build APIs, whether that be a RESTful API or even a GraphQL API. That's hands down the most common use case at the moment. And I think that was primarily pushed by the fact that API Gateway is actually such a great technology to use for building APIs, specifically RESTful APIs, because it just takes away so much of that headache of trying to manage web servers, load balancing them, and so on. It has a whole bunch of features to help you build your APIs, including things like JSON Schema request validation, API keys that you can use to throttle users on your APIs, and a bunch of others.
I mean, it's an amazing technology, and then you combine that with the power of something like Lambda in the backend that you can use to receive these requests, process them, and glue together all the other managed services that you may need, like DynamoDB and so on. And you have a very, very solid RESTful API backend that you can very easily use. And then you combine that with something like the Jamstack, which... it's odd how this relatively new phenomenon is essentially a regurgitation of an old phenomenon that we used to do in the old days.
Jeremy: Awesome. So let's dive into the HTTP or the API use case a little bit more because just so if people are listening to this, I'd love when we can kind of teach things on this podcast and we can get people to sort of just understand or make it click. Right? And so you talk about an API Gateway, we talk about Lambda as the backend. Maybe just explain exactly what you mean by, what is API Gateway and then how does Lambda really tie into that?
Gareth: So API Gateway essentially is AWS's solution to give you endpoints. So you need some way to expose an HTTP endpoint to any client. And in this case, when I'm talking about Jamstack, I'm talking about a web client in a browser, but it doesn't necessarily just have to be a web browser either. It can be a mobile application. And that's another very common use case we see, where web APIs are reused in a mobile application to provide data to a mobile app client. And API Gateway is the front-facing feature that allows you to receive data from your users.
So if you think of a frontend that's using React or Vue or any of these JavaScript frameworks, it's going to send a request to an endpoint, be it a GET, POST, PUT, DELETE, whatever it might be, to help manage that data. And in that way, it's hydrating a UI off of this API Gateway backend that you built. And API Gateway is essentially a replacement for what you would traditionally know as routes in your web framework.
If you've used any of these MVC-style web frameworks, you'd have a routes configuration that you'd apply, which would then point to a controller, potentially with some actions in it, that then handles those requests. And API Gateway essentially removes all of that work for you. You just need to configure a path pointed at a specific Lambda function, and your code then receives an event object from API Gateway that contains all the details you need for this request.
Including everything from your headers that have been received, a few added in by API Gateway to help you do some analysis on your request, to the body content that's being sent, if this is a POST request, for example. So it essentially replicates the effect of having an HTTP request come through any old web server like Apache or Nginx, but without any of that concern about configuring this very complicated piece of technology on an EC2 instance that you could misconfigure, which I've never done, I promise.
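To make that concrete, here is a minimal sketch of a Lambda handler behind API Gateway's Lambda proxy integration. The handler name, route, and event fields shown in the mock are illustrative assumptions, but the overall event shape (headers, body as a JSON string) and the response contract (statusCode, headers, body) follow the proxy integration format, and the whole thing runs locally:

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy event (sketch)."""
    headers = event.get("headers") or {}
    # For a POST, the payload arrives as a JSON string in event["body"]
    body = json.loads(event["body"]) if event.get("body") else {}
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "message": f"Hello, {name}",
            "userAgent": headers.get("User-Agent"),
        }),
    }

# Invoke locally with a mock API Gateway proxy event (field values invented)
mock_event = {
    "httpMethod": "POST",
    "path": "/hello",
    "headers": {"User-Agent": "local-test"},
    "body": json.dumps({"name": "Gareth"}),
}
response = handler(mock_event, None)
```

Notice there is no routing code at all: API Gateway has already matched the path and method before the function is ever invoked.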
Jeremy: Or not secure properly.
Gareth: Yeah, exactly. So yeah, just combining those two really makes it incredibly powerful. Especially because when you look at Lambda, Lambda is an event-driven way to run code. So Lambda by itself is kind of useful, but if you drop something like API Gateway in front of it that can trigger the Lambda function and pass data, you now have a match made in heaven essentially, something that can receive your data and then process it in real time.
Jeremy: Right. And so, I mean, that's the thing that's powerful about Lambda, right? And so Lambda is this one part of serverless, Lambda is functions-as-a-service, if people are familiar with that term. And that essentially allows you to run code, whatever it is. And like you said, in response to events. But API Gateway is one of those really cool tools. Right now, there's a new version of it, or there's the REST APIs, which is the existing version.
There's the HTTP APIs, which we talked about on the show a couple of weeks ago. But what's cool about the REST version, and this is something that will be coming to the other version as well, is this idea of service integrations, right? So Lambda functions are great and they can do all kinds of processing. But maybe explain, "Hey, maybe I don't want to use a Lambda function, maybe I don't need to use a Lambda function." What can you do with service integrations?
Gareth: Well, one of the really interesting things is, if you have a client application that's making API requests, sometimes you want the response from the actual request to come back really quickly to the client side, because you've set things up in your infrastructure in a way that you know the request is going to get processed eventually. And this is the idea of asynchronous processing. So you make the request, and API Gateway essentially comes back and sends a 200 success: don't worry about it, we've got this now. And in the back, what you've done is set up this integration with another service, like an SNS or an SQS or something like that, that receives that data from API Gateway.
So instead of directly diving into a Lambda function, which becomes a synchronous request, where your client then has to sit there and wait for the Lambda function to complete execution and return a success or a failure code, API Gateway can immediately send this data into a service integration of some kind. And that data will eventually get processed, which allows an optimistic style of coding in your client, your frontend, and so on. It's a very, very useful and efficient way to handle your API requests, so you don't have to keep diving into Lambda.
And then you also don't end up having the additional cost associated with the Lambda function, because the functions do bill you for every 100 milliseconds of execution time. And in the past, if you had just been taking the event data from an API Gateway request and dropping it into an SNS or SQS service anyway, well, this just saves you having to do that in the first place.
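The accept-then-process pattern described here can be sketched as follows. In the real setup, API Gateway writes to SQS directly via a service integration with no Lambda at all; in this runnable sketch the queue is an in-memory list and the accept step is shown as a handler function, purely so the shape of the pattern is visible. All names are invented:

```python
import json

QUEUE = []  # stand-in for SQS; the real pattern writes to the queue via a service integration

def enqueue(message):
    # In production this step is the API Gateway -> SQS integration
    # (no Lambda involved); an in-memory list keeps the sketch runnable.
    QUEUE.append(message)

def accept_handler(event, context):
    """Accept the request immediately; processing happens later, off the queue."""
    enqueue(event.get("body") or "{}")
    # Respond right away so the client isn't kept waiting on the real work
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}

response = accept_handler({"body": json.dumps({"order": 123})}, None)
```

The client sees an immediate "accepted" response, while the payload sits in the queue for a downstream consumer to process whenever it is ready.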
Jeremy: Right? And a lot of those good use cases around something that you would use an HTTP endpoint for, that would be perfect for SQS or maybe Kinesis or something like that, would be like the webhook. Right? You know, you've got a lot of data coming in. You don't need to respond to the webhook with anything that says, I've done all the processing. You just need to say I've captured the data.
So that's where I think you get a lot of really useful benefit out of using something like that asynchronous pattern. Like you said, storing that data, throwing it into SQS, responding to the client immediately and then worrying about processing that data later on down the line.
Gareth: Well, this is one of those interesting situations where... It's one of those things that didn't click for me for a while, that you have this powerful asynchronous processing capability available to you in AWS. Because traditionally if you're building a web application, you receive a request, you process things synchronously, you put things in the database, you handle those things and you return a response. Maybe you'll drop something in a separate queue.
But generally things are done synchronously. Whereas with AWS, you can run things completely asynchronously if you wish. You can drop things into SNS topics, SQS queues, EventBridge, all sorts of different services. And that means that in the background, things are processing while your latency-sensitive requests on the frontend, for example, have already completed, and it gives a great user experience. So it's a very interesting way to build an architecture.
Jeremy: Right? And another thing that you see that's becoming very popular and I think this probably started with the idea of building mobile applications and trying to minimize the amount of data that you're passing back and forth, is this idea of GraphQL. Right? And the benefit of GraphQL is the client can make those requests and they can just request the data that they need.
So that way they're not over fetching data. But also they can combine multiple bits of data together so that they're not under-fetching data either and need to make multiple calls. So API Gateway is great. You can actually build a GraphQL server with Lambda and API Gateway if you wanted to, "server" in quotes. But there's actually a service for that called AppSync. So can you just explain sort of what that does?
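The over-fetching point can be illustrated with a toy resolver: the client names the fields it wants, and only those come back. A real GraphQL server (or an AppSync resolver) does this per the query's selection set; here it's reduced to a dict comprehension, and the record and field names are invented:

```python
# Toy illustration of why GraphQL avoids over-fetching: the client asks
# for specific fields and the resolver returns only those.
USER_RECORD = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "orders": [101, 102],
}

def resolve_user(requested_fields):
    # Return only the fields named in the (simplified) selection set
    return {f: USER_RECORD[f] for f in requested_fields if f in USER_RECORD}

result = resolve_user(["id", "name"])  # email and orders are never transferred
```

The same mechanism addresses under-fetching too: one query can name fields across several related types, so the client makes a single round trip instead of several REST calls.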
Gareth: Well, AppSync is a great tool to take away the headache of managing your own GraphQL server, which is no small feat. It can become quite a hairy situation to do that yourself. But AppSync essentially lets you tie all sorts of different resolvers to your GraphQL query. So you could link one portion of your GraphQL query to a Lambda function, which will then go off and do some interrogation, maybe inspecting S3 buckets, building a data model, and returning that to your AppSync model.
You might talk straight to a DynamoDB table and pull some data in that way. You may even just grab items right out of an S3 bucket as part of your GraphQL query. And again, this gives you the power of an asynchronous feature where you're not necessarily incurring that latency involved in running code all the time. AppSync is always available. And just like a lot of the other services, it's pay-per-request, so you don't have a VM or a container sitting idle at 2:00 AM when there are no customers.
AppSync is really sitting there waiting to receive a request, and it only bills and only executes in your architecture when there are actual requests coming in, which is pretty useful as well.
Jeremy: Right. Yeah. And the thing that's nice about AppSync is that it is just massively scalable. The throughput on it is insane. I mean, the same thing with API Gateway, where you might be setting up load balancers in the past, right? Even if you're using elastic load balancers or application load balancers, you still have to worry about what it's hitting underneath. And if you connect these things right with serverless, that AppSync, or I should say that GraphQL use case or that HTTP API use case, is just massively scalable, and it's great.
So the other sort of, I guess, common use case that we see is this idea of real-time communication, and AppSync has a way of doing syncing. It does offline syncing and some of that stuff, it's kind of built in. There's a whole new DataStore thing that they built, which is really cool. But I think a lot of people are more familiar with WebSockets. Right? So that's another thing we can do with serverless now.
Gareth: Yeah. And I've actually worked with an organization to help build out an eCommerce product using serverless WebSockets, and the biggest advantage that WebSockets gives you is maintaining those connections to clients. And that really becomes one of the trickier things to do if you have to manage a WebSocket set of infrastructure yourself. But with WebSockets in API Gateway, essentially part of API Gateway V2, as AWS calls it, you have WebSockets available to you.
And the product that we ended up building was essentially a product counter on a frontend: as a user is sitting watching a screen, you just see a counter counting down as items are sold. And this was part of a larger Magento backend again. So you have a data store storing the quantity of products in your warehouse, and as items are sold, it's updating through a sequence of Kinesis dropping into a DynamoDB table, which can then trigger your Lambda function again.
So this is all part of that asynchronous side of things that I was talking about. Again, you have a WebSocket connection that's set up by a client when they connect to a page. So they hit a product page, and a WebSocket connection is created to the backend that you create through API Gateway, for example. And in the background you have Kinesis receiving data about items that are sold, which then gets stored in DynamoDB.
That DynamoDB table is using streams to trigger a Lambda function, which can then go and look at the current active connections on that product page and send that data to the frontend. And the frontend then receives the data and can update the DOM with the actual value of products available. And WebSockets are great for that kind of use case because you're not dependent on 100% perfect reliability.
WebSockets do have a reputation of not being perfectly reliable. But if you need to give people a rough estimate of the number of products available on a product page, it's a perfect use case. It just gives that kind of nice solid feedback to the user that there's something here they potentially want to buy, because they can see the product running out, or whatever it might be.
Jeremy: Right. Yeah. And obviously things like real time chats and anything where you want to be able to push data back and forth, multiplayer online games. I mean there's all kinds of crazy things you could do with WebSockets, but I think something that probably confuses a lot of people when they look at the WebSocket piece of things, from a serverless perspective anyways, is the fact that Lambda functions, if people are familiar with how they work, they are stateless, right?
So a WebSocket creates a connection, a long-polling connection in a sense, and keeps that connection open and remembers who it is that's connected to it. But you're not running a Lambda function in the background that's just sitting there and waiting. So can you explain, because it is through API Gateway, how API Gateway handles those long-lived connections while still using ephemeral compute with Lambda?
Gareth: So WebSockets on API Gateway. Essentially, API Gateway manages that long-lived connection somehow with the client. To be perfectly honest, I don't know the internal details of how AWS has configured this, and personally I don't really care. It works. It does the job, and that's the point of serverless as well. I don't want to have to worry about that undifferentiated heavy lifting of running a WebSocket service.
They do that really well. But how this works from an implementation point of view, if you're building it yourself, is that you essentially configure your WebSocket connection almost exactly the same way as you do a regular RESTful API with the Serverless Framework, for example. And when a user connects to the WebSocket through API Gateway, you can set a specific Lambda handler to be triggered on a connection event with the WebSocket.
And this is useful because you can set your Lambda to receive these connection attempts. Which gives you a unique client ID and this is negotiated between the browser and API Gateway itself. So you don't have to worry about the details of it. You get essentially a UUID of that user's connection. And at that point you now have a unique reference if you do need to communicate back with them.
But as you said Lambdas are stateless, so we can't just use a session token or anything like that. So one easy use for that is DynamoDB, which if anyone's not familiar with it, it's a fantastic key value data store, that is incredibly useful in the serverless context and because of its low latency and high throughput, DynamoDB is fantastic for storing these UUIDs. Essentially because the only thing you're storing is a UUID and potentially the location that the person connected from, so you have some way to refer back to where they've connected.
And at that point, setting up the connection triggers your Lambda function on connect with the UUID and the location of the page the person's viewing, and you can just log that into DynamoDB as is. When a product information update comes along, you can see that a specific product has a stock level change, and you know the location in your frontend where this product is viewed, what the page URL is, what the path is.
There's none of the concerns about the load again, because all of the infrastructure we've used behind the scenes is completely load balanced because of how AWS manages this for us. We're not dependent on any server-full infrastructure that might need load balancing, that might run out of capacity, because DynamoDB is an absolute monster when it comes to providing you capacity.
API Gateway itself, as we've said, has all the capacity you might need. And most of the work then is done by the client, which updates the DOM in real time when the data comes across the WebSocket connection.
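The connect/disconnect/broadcast flow just described can be sketched in a few functions. This is a local illustration, not the exact production setup: the connection table is an in-memory dict standing in for DynamoDB, and the `send` callback stands in for API Gateway's management API call that pushes data back down a connection. The connection ID in the event's request context is the real mechanism; all other names are invented:

```python
import json

CONNECTIONS = {}  # stand-in for the DynamoDB table of active connection IDs

def on_connect(event, context):
    # API Gateway supplies a unique connection ID in the request context
    conn_id = event["requestContext"]["connectionId"]
    page = (event.get("queryStringParameters") or {}).get("page", "/")
    CONNECTIONS[conn_id] = {"page": page}
    return {"statusCode": 200}

def on_disconnect(event, context):
    CONNECTIONS.pop(event["requestContext"]["connectionId"], None)
    return {"statusCode": 200}

def broadcast_stock(page, stock, send):
    """Push a stock update to every connection viewing `page`.
    `send` stands in for the API Gateway management call that posts
    a message back to an open WebSocket connection."""
    for conn_id, info in CONNECTIONS.items():
        if info["page"] == page:
            send(conn_id, json.dumps({"stock": stock}))

# Simulate two viewers on different product pages, then one stock update
on_connect({"requestContext": {"connectionId": "abc"},
            "queryStringParameters": {"page": "/p/9"}}, None)
on_connect({"requestContext": {"connectionId": "def"},
            "queryStringParameters": {"page": "/p/7"}}, None)
sent = []
broadcast_stock("/p/9", 4, lambda cid, msg: sent.append((cid, msg)))
```

In the real architecture, `broadcast_stock` would be the Lambda triggered by the DynamoDB stream, querying the connections table and posting to each connection.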
Jeremy: Right. Yeah. And actually, at one of the startups I was at, well, several years ago now, we built a real-time interface. You could comment on things, comments would appear in real time. We started using long polling, right? This is constant polling, which is just a terrible, terrible idea. You're always setting up and tearing down connections. So we ended up installing this software, I think it was called Ape, A-P-E.
I had to modify some of the backend because it wasn't a load-balanced application. So we had to make it so that it would work from a load-balancing standpoint. And so we had multiple EC2 servers running, we had an elastic load balancer in front of it, and we had to bounce them back and forth between those. We had to use sticky sessions. It was a nightmare. And I don't know if anybody's ever tried to build a chat application at scale, but it is a lot. It is a lot.
And so this WebSocket use case from API Gateway just... I mean I really like the way they set it up. I think it's really, really interesting. I do wish there was like a broadcast channel that you could use to maybe like broadcast to everybody that was in group A or group B, without having to run some of those things. But it is very possible and like you said, it's actually not too difficult to set up.
Gareth: One of the interesting things is, when I first started building back in 2016, we needed a WebSocket-style setup as well at that time. And back then, API Gateway didn't have the WebSocket capability built in. But you could jerry-rig it using the IoT service in AWS, with MQTT protocols and WebSocket connections, and get it working. And that was one of those situations where we went down that rabbit hole, we built all the stuff out, it looked really great, it performed really well. And as soon as we were done, WebSockets came out for API Gateway. So that's one of those lessons you learn in serverless: the moment you want to build something yourself, AWS solves the problem for you.
Jeremy: Well, what you need to do is you just need to pretend that you're building it and tell everybody you're building it and then AWS will come up with it a few months later. And then you don't actually have to build it. But, no that's...
Gareth: Tweet it a couple of times. Blog it a couple of times. Let AWS know.
Jeremy: Exactly. All right. So those are really great frontend use cases, I think. I mean the API Gateway obviously or the API use case you can use for internal APIs and some of that stuff as well. I think that makes a lot of sense. But there are a lot of other use cases that go beyond just maybe interacting with a website, and then this ties into this, but this could be used for other things as well, and this is this idea of clickstream data.
Gareth: Yeah. So clickstream data is an interesting one. I was working with an organization that found they wanted to get more information about what users of their product were doing. And this was more a case of: they weren't personalizing. So again, this was an eCommerce platform, and they wanted to provide some kind of personalization, some personalized recommendations on the platform.
And it was fairly tricky to do this because they weren't super high volume in sales. So it's difficult to pinpoint what the personalization would be, because they didn't have thousands of products that somebody would have bought to then understand what people like. So they wanted some way to determine: if someone's viewing this, more often than not they click through on this, they click on this, they select this. And this ended up with a project to capture clickstream-style data.
And again, depending on your volume, and in this case the volume was reasonably high, you need a tool, you need something like Kinesis, for example, which is probably one of the least serverless serverless products that you very often find, because Kinesis still has this element of needing to allocate shards, or some form of capacity. But it still handles an enormous quantity of data, which is pretty impressive.
And it's really good at handling this time sensitive data that gets piped in constantly through a stream, exactly as the name suggests. And that's what we were finding. We needed some way to capture a lot of data very quickly and constantly all the time. So Kinesis is a great way to manage capturing this click stream data. But you also need somewhere to store this, once you're done.
So Kinesis has a great feature called Firehose, it's actually called Kinesis Firehose, where you can capture all this clickstream data and just point it at an S3 bucket and say, "Put all the data there," instead of trying to find ways to process the data once it's in Kinesis. And this prevents you from having to spend a lot of time and effort on Lambda or any other compute platform processing vast quantities of clickstream data, while you still capture it and store it somewhere.
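To sketch the capture side of this pipeline: click events are usually encoded as newline-delimited JSON, which is a layout Firehose can deliver straight into S3 so Athena can later query it. The batching call itself (to the Firehose API) is omitted here so the sketch runs locally, and the event field names are invented:

```python
import json

def to_firehose_records(click_events):
    """Encode click events as newline-delimited JSON records.

    This is the shape a Firehose delivery stream can write to S3; the
    actual API call to put the batch on the stream is left out so this
    sketch runs without AWS credentials.
    """
    return [{"Data": (json.dumps(e) + "\n").encode("utf-8")}
            for e in click_events]

# Hypothetical click events from a product page
records = to_firehose_records([
    {"userId": "u1", "action": "view", "productId": "p9"},
    {"userId": "u1", "action": "click", "productId": "p9"},
])
```

The trailing newline per record matters: it's what lets each line in the resulting S3 objects be parsed as one JSON event when Athena scans the bucket.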
And then there are other services this helps with, for example Glue and Athena, two completely unrelated names that work together. AWS naming scheme hard at work. But Glue and Athena work really well together, because Glue allows you to do things like introspect the format and the structure of your data in a way that it can present it to Athena as if it's a SQL table or SQL database. And that lets you run SQL queries essentially on top of data just sitting in an S3 bucket, which is incredibly useful.
And that was eventually what ended up happening: there was a personalization engine in the background that was receiving all of this clickstream data, which was being funneled into the AWS backend, dumped into an S3 bucket, and then there were regular Glue and Athena jobs running, I think it was on an hourly basis. Even that was triggered by a scheduled Lambda.
So you have these jobs running on a regular basis, and with Athena running queries, you can now take useful information out of raw data and push that into any other BI platform you might have, including a backend for the application itself, to then start building personalization into the platform. Which ended up being a pretty useful project at the time.
Jeremy: Yeah, no, and I think the really nice thing about this Kinesis, to S3, to Athena pipeline is that you don't even necessarily need Glue, depending on how you're writing your data. Especially with Kinesis Data Firehose you can automatically convert it. You can actually run conversions while it's processing, before it puts it into the S3 bucket. But what I really love about that is it's essentially 100% serverless, and you're just really paying for when you run the queries. Obviously you're paying for the shards and stuff like that for Kinesis.
But you're not paying to just store this ton of data in an expensive storage place. I mean, S3 is relatively inexpensive for that amount of data. If you were to use something like Elasticsearch, which is a popular analytics tool and so forth. We tried that at one point and we were just doing some of the projections with the number of clicks we were collecting per day. And it would've just kept on adding more and more data, more and more data.
We had to keep making the drives bigger and bigger and bigger in order to handle all this data. And we were thinking about, well, maybe aging some things out, doing some roll-ups, aggregations, and that's all possible. And you could still do that with S3 as well. But I really, really love that use case because, that is such an important thing now is capturing that data to understand what your users are doing on your site.
And whether it's for personalization or whether it's for other types of optimization, or you're collecting clickstream data for AB testing or any of that sort of stuff. It is just a really, really good use case. And I think with the tools in place now serverless just handles it so well.
Gareth: And the interesting thing is, to start off with, we actually looked at two models, and we went with the Kinesis model. But we even investigated using just API Gateway with a Lambda function, dropping items into S3, as one potential method to handle this, just because that was what we were familiar with at the time. And that works, and the scale that you can get with that is pretty impressive.
Kinesis just ends up being a more performant and cheaper way to run these kinds of operations. But if API Gateway and Lambda is something you need as well, it can still handle these kinds of clickstream events. It's just that there are often specific tools made for a specific job, and that's often the good one to go with.
Jeremy: Right. All right. Again, clickstream sort of does fit into that frontend piece. But so as we move past this, one of the things that I know I've seen quite a bit of is people using just the power of Lambda compute, to do things right? And what's really cool about Lambda is Lambda has a single concurrency model, meaning that every time a Lambda function spins up, it will only handle a request from one user. If that request ends, it reuses warm containers and things like that.
But, if you have a thousand concurrent users, it spins up a thousand concurrent containers. But you can use that not just to process requests from let's say frontend WebSocket or something like that. You can use that to actually run just parallel processing or parallel compute.
Gareth: Yeah. This is, what do they call it, the Lambda supercomputer. You can get an enormous amount of parallel... try to say that three times quickly... parallelization with Lambda...
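The fan-out idea behind the "Lambda supercomputer" can be sketched locally: split a workload into chunks, process each chunk in its own worker, and combine the results. Here threads mimic the concurrent Lambda invocations so the example runs on a laptop; on AWS each chunk would be an asynchronous Lambda invocation, with results collected via a queue or a DynamoDB table. All function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    """Split a workload into payloads, one per (simulated) Lambda invocation."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def process_chunk(payload):
    # Stand-in for the Lambda handler body; each invocation handles one chunk.
    return sum(payload)

# Locally, threads mimic the parallel invocations; the concurrency model
# means AWS would run one container per concurrent invocation.
payloads = chunk(list(range(100)), 10)
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(process_chunk, payloads))
total = sum(results)
```

The design choice here is the classic map-reduce split: the chunk size trades per-invocation overhead against the degree of parallelism, and because Lambda scales out per concurrent request, a thousand chunks can genuinely run as a thousand parallel workers.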
ON THE NEXT EPISODE, I CONTINUE MY CHAT WITH GARETH MCCUMSKEY...
THIS EPISODE IS SPONSORED BY: AWS (Optimizing Lambda Performance for Your Serverless Applications)
What is Serverless Chats?
Serverless Chats is a podcast that geeks out on everything serverless. Each week, Jeremy Daly chats with another serverless champion to explore and do a deep-dive into specific topics in the serverless space.