Serverless Chats

Jeremy chats with Sheen Brisals about the problems the LEGO Group was trying to solve with serverless, what they learned from their journey, and what's the idea behind "functionless."

Show Notes

About Sheen Brisals

Sheen is an experienced software engineer, solutions architect and team coach, currently architecting serverless solutions at LEGO. He previously worked as a principal engineer, tech lead and development manager with leading organizations such as Oracle Corporation, Hewlett-Packard, Omron, TATA, BAe and others. Sheen is a regular speaker at tech gatherings and a keen participant at Serverless meetups, AWS conferences, Serverless Days and others.

Transcript

Jeremy:
Hi, everyone. I'm Jeremy Daly and you're listening to Serverless Chats. This week I'm chatting with Sheen Brisals. Hi Sheen. Thanks for joining me.

Sheen: Hey, Jeremy. Pleasure to be here. Thanks for having me.

Jeremy: So you are a Senior Application Engineer at the Lego Group. So why don't you tell the listeners a bit about yourself and what you do at Lego.

Sheen: Right. So, yes, I am a Senior Engineer at Lego. I joined Lego three years ago, and as part of my role with Lego, I act as a team lead; I act as an architect; and I also coach fellow engineers on their career progression. In terms of my career, I started way back in the nineties as a software engineer. I've been through quite a few organizations - both big and small - and been involved with a number of software development projects all the way through. I joined Lego at a juncture when they were thinking of moving to microservices, so that's why I came on board with Lego, around three, three and a half years ago.

Jeremy: Awesome. Alright, so you are like jet-setting around the world telling the story of how Lego.com went serverless and I've seen you speak at a number of conferences and I think it's a huge service that you are doing for the serverless community sharing this because it is important, I think, for teams to see how other companies are doing it and how other people are implementing these things because it's very new. Serverless is very new, and even though it's been around for five years at this point, there are still a lot — there’s a long way to go. So I want to talk to you about this idea of the serverless journey at Lego.com. So let's start sort of where are you now? Where is Lego now with serverless?

Sheen: So within Lego, there are different teams or departments embracing serverless. I am with the team that focuses on shopper technology, shopper engagement technology, which includes Lego's ecommerce platform and all the tooling around that. We are progressing well with the serverless approach; as you know, we migrated the ecommerce platform onto serverless. But we didn't stop there. Our journey continues beyond that, because there are new initiatives, new developments coming up, and because we see the benefit serverless gives in terms of the speed, the velocity, with which we can bring out new features to customers.

Jeremy: Great. Alright, so let's go back to the beginning, because this is one of those interesting things where I think companies get to this point where they're either starting their cloud adoption or they're running on legacy hardware, and they say, "Okay, we need to now make this move." And a lot of people go down that container route, but your path was a little bit different, so let's start with where you guys were. Where were you a couple of years ago when you came in there? What was the technology?

Sheen: So at that time, it was all on-prem. We had an old version of the Oracle ATG ecommerce platform hosted on-prem, with an Oracle database, and the platform itself talking to a bunch of other services within Lego. At some point there was an initiative to make it more API-based, and even at that time we had the first APIs around it. But still, it was on-prem and a monolith, though the front-end had moved on from being JSP-based to a JavaScript-based front-end hosted on Elastic Beanstalk on AWS. That's pretty much what we had two years ago, around that time frame. And we had a number of issues associated with a typical monolith platform: maintaining it, releasing new features and fixes, things like that. So that was pretty much the landscape we had at that time.

Jeremy: And then you had sort of a “come to Jesus” moment on Black Friday, right?

Sheen: Yes. So there are different things. Though I focus on that particular incident, there were one or two other thoughts going on. The first one is that the ecommerce platform itself was aging, very old, so we had to move on. And then Lego, as a company, wanted to reach out to many children around the world. That means they needed the platform to go out and launch the shop, and make the bricks and everything available to children around the world. So from the business side of things, that was a need: we needed a platform that could provide us that capability. At the same time, we wanted to migrate from the old platform, but we were not thinking of serverless. A typical microservice, put everything in Docker, or, you know, containerized and instance-based: those were the different ideas floating around. Then came this Black Friday, and we had this catastrophic failure of the platform. That triggered some of the conversations internally to get to a point that we must break up things. We can't have everything together as one piece. We wanted to make sure that we don't fail just like that; if something fails, the rest of the platform should be able to carry on. That's where things got ignited, I would say, and the business and the engineering teams started to discuss and come up with proposals and ideas. Serverless was very much new at that time, and no one had any experience or exposure within the team I belonged to. So I went out of the office, looking at AWS, talking to people, attending different conferences and meetups and things like that to gather ideas. Still, it wasn't clear where to take the leap. So that was when we had that Black Friday failure, and from then on it was based on the technology improvements, the cloud initiatives, and the organization-wide need for a digital transformation initiative too.
So all those things came together for us to, you know, take on this serverless journey.

Jeremy: Had you already decided to move to the cloud? Like, basically, you said on-prem is not working for us, or we need to be able to scale more. So you decided to move to the cloud, and you said you looked at containers. But what was it about serverless that made you say no, we definitely have to go this route?

Sheen: Okay, so there was another factor. As part of the platform migration or the upgrade, there were a bunch of options that were looked at: either have a platform similar to what we had, with everything available as part of the platform, or go the other way around and look for a simple, headless, API-based platform where we can put our logic around it, so we have the freedom to innovate, scale, bring in new features and all the other capabilities from our side. That was kind of the discussion point. Then we chose that, okay, we need the flexibility, because we don't want to be constrained by some of the commerce platforms out there. We wanted the freedom to innovate, the freedom to bring out features the way that we wanted to bring them to the customers. That was the main shift. That's when we started looking at cloud as, well, you know, an enabler for us. And then, with all that, the availability, the HA, and the scalability all came along with, you know, the cloud thinking. So that's sort of the point where we started to focus on AWS Cloud, and then came the serverless mindset.

Jeremy: Nice. Alright, so now that you've made this decision. You’ve said, “Okay, we're moving to the cloud. We're going to go ahead and take the serverless-first approach.” So what was the process for getting started? Because obviously, moving to the cloud is a big step for a lot of companies. There's security. There's understanding, you know, just the ecosystem and what's available. So what did that process look like for Lego?

Sheen: Okay, so at that point, you know, the Black Friday failure happened, and that gave us the opportunity to try out something simple when we wanted to decouple a small part of our system. And that's where we introduced serverless, because, not knowing anything about serverless, we didn't want to take a huge risk. So we said, "Okay, let's try this out. If this works, we can take it further. If not, we can change course and try something else." And for us it worked, and from there what happened was that we started to realize the potential that serverless could provide us. Slowly the different managed services became familiar to us: S3, DynamoDB, SQS and all the different managed services that are part of the serverless ecosystem. Then we were able to see the opportunities that would bring us going forward. So that's kind of, you know, the time we started. Based on that, we then put together sort of guardrails, if I may say, so that when the team starts focusing on serverless, we won't kind of debate and deviate from where we wanted to go. That's when we said, "Okay, let's go serverless, and let's use the managed services where available so that we don't need to bring anything else in and reinvent the wheel." And then, "Let's use AWS Cloud, because lots of people use AWS Cloud, and all the services and features that we looked at are all there." And there was a great community around it especially: plenty of meetups, and the talks, and the user groups are awesome. So those were sort of the initial principles that we put together to move us forward to the next stage.

Jeremy: So before we move on, I want to go back to what you said earlier about taking a small chunk of the application, biting off a small piece and testing that. I think that is an awesome way to get started with serverless, especially for large organizations, especially when there is potential, disagreement is the wrong word, but maybe there's not quite the amount of confidence from upper management. So is that something that the engineering group or the management of the engineering group saw, where you were able to do these small experiments and have successes, and that was sort of what built the confidence for you to go entirely serverless or to really push the serverless mindset there?

Sheen: Yes, that's a very, very good and important point, because when we look at a monolith, we often get confused: where do we make a start? Because everything looks big. But the thing is, we need to start looking more closely, part by part. Then we will be able to identify some small entry point into the system that will give us the comfort to, you know, try out something new. And also, when you work in organizations, you need to prove or showcase these things to the stakeholders, you know, to get their buy-in. For that, it's important that we identify a part of the system that is not complicated, small enough that we can experiment with the new ideas, and show them the proof that it's working and that it's feasible for us to go forward. Show everyone around, not just the engineering team: bring the business stakeholders, everyone, together. Yeah, so that's very crucial, especially when we start this monolith to microservices, serverless journey.

Jeremy: Yeah, I love that point because again, it's such a good way to do it, you know, whether it's just peeling off the email sending component of your system or something like that. It is such a useful way, and then it's easy to have those early successes and prove things out, especially if you're trying to get adoption. Alright, so you talked about some principles or some guardrails that you sort of formed around serverless. So what were those? Were those about this idea of using services or managed services when they were there? Or was it also about coding styles, you know, fat Lambdas versus single-purpose Lambdas, things like that? What were those guardrails you put in place?

Sheen: So the initial guardrails were all kind of high level: the managed services I talked about, and the tooling, language preference, testing, those kinds of things. Because, I mean, if you think of the lean or the fat Lambdas, well, it was too early to think of all those things; those come as part of experience. So initially, when we start, we can set up these sorts of toolings and all the different frameworks and so on, because often, you know, when you have a bunch of engineers around, there are always different preferences and choices. So it's important that you respect what you already have rather than whatever is happening outside. Say, for example, in our case, we could have argued between Golang and JavaScript forever. But when we looked at the skillset we already had, JavaScript was the obvious choice. Then we started looking at, okay, where is the tradeoff between these two, or between different languages or tools. Then you realize, okay, it isn't that bad. You're not making a completely wrong choice or anything; you're still within the preferred toolings. Similarly, with the framework we chose, we wanted to get going faster. So what gives us that capability to move forward faster? These were the initial sort of guardrails and principles that we set in place. Yeah.

Jeremy: So once you had that, then you started sort of forming some teams around these different components you needed to build, right?

Sheen: Yeah. So we went away from the typical full-stack setup at that point, simply because we didn't have enough AWS or serverless knowledge that we could distribute across different squads. That was an important decision, which paid off really well for us, because we pooled the AWS skills and serverless skills into one team. We started out with two or three engineers with some of the knowledge, then slowly grew the team, or the squad. That was one of the best things that we did at that stage, because then that particular team was able to come up with the required serverless services, implementation-wise, while the other teams were looking at the GraphQL layer or the front-end. They were able to consume these services and then work among themselves to take this forward to the next level. That was one of the best things that we did at that stage, and by the time we migrated the shop, we had like 10 or 12 engineers focusing purely on AWS and the technologies around it.

Jeremy: So how did you make that shift from a DevOps standpoint, right? You had a lot of stuff on-prem. You're migrating things into the cloud, and obviously with serverless, you know, the engineers are probably putting in the IAM permissions as part of the deployment there, and you want to automate some of that stuff. And I know you did quite a bit of automation right from the beginning, which I think is the advice that I would give to everybody: just take a few extra steps and figure out how to do the automation piece, because you don't want to configure anything manually when you're launching something to production. So what was that process for Lego, you know, when your team was thinking about automation and DevOps? How did you make that responsibility shift?

Sheen: Okay, so that's partially from previous experience. In the traditional way we had it, we had a different team looking at all the infra and doing all the infra setup and coding, and the engineering team had nothing to do with that. But with this one, when we started this journey, we wanted to have that infra team as part of the engineering team. So that happened. But we were still in a dilemma over how to split the responsibility: is the engineering team going to be responsible for this thing, or the infra team? Initially we started out with the infra team, but that soon became a bottleneck for us, because the engineering team was able to come up with the service implementations faster, whereas it relied on an infra engineer to set up all the, you know, scripting and everything else for the deployment. So we took some time off and discussed and said, "Okay, let's kind of merge these responsibilities." An infra engineer is a specialist; they will still own certain areas that they work on, whereas for the day-to-day service implementation and delivery kinds of things, the engineering team will start incorporating the scripting, so that we don't have this sort of blockage to progressing further in terms of the delivery of the services. These things don't happen overnight. If you don't have previous serverless experience, this will be part of the new learning, and that's fine. That's how teams learn and move forward. As long as we are flexible enough to understand and take the appropriate approach, that should be fine. There is no reason why we should say, "No, we can't do that; we have to have this team doing that and the engineering team doing this." If we have that mindset, then we won't, you know, progress fast enough.

Jeremy: I definitely agree. I think that when you're building out serverless teams, if you have a few people that are working on sort of that cross-functional team, where you've got people who understand security, people who understand the development, and, depending on how big the team gets, people who are dedicated to DynamoDB or Kinesis or any of these other things, or can focus some of their skills on that, I think that is a huge advantage when you kind of put that together. Alright, so the other cool thing you mentioned, and maybe you could explain this because I think it's probably a relatively common type of practice, but you call it "solution detailing." Can you talk about that a little bit?

Sheen: Yeah, sure. So as I mentioned, when we started we didn't have enough AWS or serverless knowledge. I was the one with AWS knowledge at that time, and I had maybe one or two engineers coming on with AWS skills. So in order for us to gain momentum in terms of engineering the services faster, someone had to provide the details to the engineers, because imagine an engineer coming on board with JavaScript knowledge but no or limited AWS skills. That means we need some way of enabling them to use their programming skills coupled with the serverless, AWS skills. That's where the solution detailing helps. People like me would take a small part of a system that we're going to architect, put the architectural picture or diagrams in there, and go on detailing at each service level. Say, for example, I would put, okay, this is an S3 bucket, and put it in the diagram: this bucket is going to be doing these things; we'll have these sorts of event triggers for these Lambdas; and it will have these lifecycle policies, so we don't keep data beyond a certain duration or number of days. Those sorts of things go in as part of the solution detail. What that does is, the engineers can go through it, especially when they are new, and they know exactly what needs to be done. Then that raises discussions or questions. So obviously it's a collaborative effort between me, or someone else doing the solution detailing, and the engineer or engineering team. That way, the knowledge slowly gets transferred to them as well.
So they get familiar with these terms and technologies. Especially, you talked about the IAM permissions, and that's an important point, because when someone starts new on AWS, they wouldn't have a clue what we're talking about with IAM permissions. It's important that the senior engineers or the architects explain to them: this is the reason, and show them the way you need to do it and why it's a best practice, and all the other things. For those reasons, this solution detailing helped a lot. One additional thing I do is that when I finish the solution detailing, which is like a Confluence space, I give it to the engineering team or the engineer who will be working on it and say, "Now you own it. If you make any changes, you don't need to talk to me unless you change the entire architecture," for example, which probably won't happen. Small changes, you carry on; that's your document, you keep it up to date and go along with it. So that then becomes kind of a reference for teams when they do the testing, and, you know, the QA teams keep up with that too. They know exactly what's expected and what is there.
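The lifecycle policy detail Sheen mentions in a solution-detailing page might be sketched like this in JavaScript. The bucket name and retention period are hypothetical examples, not Lego's actual configuration, though the object shape matches the params the AWS SDK's `putBucketLifecycleConfiguration` call expects.

```javascript
// Build the params for an S3 lifecycle rule that expires objects
// after a set number of days, so the bucket doesn't accumulate
// stale data. Bucket name and duration are made-up examples.
function buildLifecycleParams(bucketName, expireAfterDays) {
  return {
    Bucket: bucketName,
    LifecycleConfiguration: {
      Rules: [
        {
          ID: `expire-after-${expireAfterDays}-days`,
          Status: 'Enabled',
          Filter: { Prefix: '' }, // apply to every object in the bucket
          Expiration: { Days: expireAfterDays },
        },
      ],
    },
  };
}

// In real code this object would be passed to the SDK, e.g.:
//   await s3.putBucketLifecycleConfiguration(buildLifecycleParams('orders-export', 7));
const params = buildLifecycleParams('orders-export', 7);
console.log(JSON.stringify(params.LifecycleConfiguration.Rules[0]));
```

Writing the rule out this way in the solution detail tells a new engineer exactly which knobs exist before they ever open the AWS console.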

Jeremy: Yeah, and I really like that approach. I know for me, you know, I've been living and breathing serverless for, I don't know, four years now, or something like that. And every time, I mean, because things move quickly, new services become available, there are new ways to do things, there are new leading practices that emerge, and you'll be there working on something. Maybe you have it in your head; you know what you want to do. You're doing some TTLs in Dynamo or you're going to make this update, whatever it is. I know I do that: I'm in the middle of writing something, and then I question whether or not this is the right way to do it, and is there a better way? And so I think putting together sort of a proof of concept, or this idea of having the solution detail, and being able to share that too and iterate on it a little bit before you commit, because that is one thing that I really like about serverless. I find myself spending 80-90% of my time thinking about how I want to solve the problem and about 10-20% of my time actually writing the code that does the solution. So it is really good to think those things through first, and, as you said, for new engineers, if you think that stuff through, writing code is writing code, and if you've outlined that underlying architecture, I think that's a great approach. That will certainly speed up the learning process for people new to serverless.

Sheen: And also, just one more thing to add: it sometimes leads to new ideas. Say, for example, a few weeks ago I did a solution detailing for a particular feature, and in there I opted for the on-demand DynamoDB backup. I said, "We need to have a scheduler; we need to back up, you know, this many times a day," because that was the need. There was no need for the automated backup for that particular data. Then the engineer started implementing, and one of his colleagues got into the discussion and said, "Why can't we use an existing thing? Why can't we use automated backup?" So I explained that, you know, there is no need for the automated backup; you don't need to keep 30 or 35 days of backups or anything. This is a simple service, and the data isn't that critical. Then the discussion continued: don't we have anything available as part of AWS for this? That triggered it, and I realized that, of course, there is AWS Backup, which supports DynamoDB. So we don't need our own scheduler doing this; AWS Backup does it. So that's the sort of idea that's born out of the solution detailing: the discussion followed, and it led to a better approach.
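As a rough sketch of the scheduler approach Sheen started with, before the team switched to AWS Backup: a scheduled Lambda would build the parameters for DynamoDB's on-demand `CreateBackup` call a few times a day. The table name and the timestamp naming scheme are invented for illustration.

```javascript
// Build params for DynamoDB's on-demand CreateBackup API.
// Backup names must be unique, so stamp them with the date and hour.
function buildBackupParams(tableName, now = new Date()) {
  const stamp = now.toISOString().slice(0, 13).replace('T', '-'); // e.g. 2020-06-01-09
  return {
    TableName: tableName,
    BackupName: `${tableName}-${stamp}`,
  };
}

// In the hand-rolled version this would run inside a Lambda on an
// EventBridge/CloudWatch schedule:
//   await dynamodb.createBackup(buildBackupParams('cart-sessions'));
console.log(buildBackupParams('cart-sessions', new Date('2020-06-01T09:30:00Z')));
```

The point of the anecdote is that the whole function above becomes unnecessary once AWS Backup owns the schedule, which is the better approach the discussion arrived at.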

Jeremy: Yeah, definitely. That's a great idea. And I think the distinction, too, to make — this isn't like waterfall development, right? We're not designing out the entire application and specifying exactly how everything works. We're just basically pinning down or you're outlining a good architecture, but the implementation details would still be up to your engineers.

Sheen: Yeah.

Jeremy: It's great. Okay, So if I remember correctly from your talk or one of your talks, you had a bit of an awkward start with CICD, right?

Sheen: Yes. That is, I wouldn't say it's perfect. It's still a work in progress, because, you know, at the time when we started, we were in a hurry to finish something by a certain deadline. You know, though we were doing agile, we had set, you know, a target to beat. So this is one of the areas where we had to make some sacrifices, because we didn't have enough engineers or infra engineers to carry on all these things. So this is still a bit of a manual process for us. We don't have the end-to-end automated deployment pipeline. When a PR is merged, it goes through the pipeline and gets to the QA environment; from there on to acceptance and production is still manual. A bunch of work is happening as I speak to make it more fluent and take it, you know, all the way through. But it's not going to happen anytime soon, because there are a number of different stages we need to get through, and even the CI pipeline itself needs improving. So it is still a work in progress, but we are in a much, much better position than when we started last year.

Jeremy: Well, yeah, I mean, the other thing is that CICD and serverless is just hard. It's not as straightforward as, you know, just putting something up there and deploying it to the individual servers. I mean, there's so much more with the infrastructure deployment and the infrastructure as code and things like that that have to happen, and then, of course, you know, if you're deploying into different accounts and things like that. I had a whole episode with Forrest Brazeal about this, and yeah, it's complex. So don't feel bad about it, because it's certainly not the easiest problem to solve.

Sheen: Yeah, exactly. That's why, in my talks, and when I talk to other people at conferences, I'm open and say that, you know, it's not perfect; it's progress. We're still improving. And that's, you know, that's the best thing: you should be able to continuously improve as you move forward.

Jeremy: Awesome, alright. So a lot of best practices, or, I guess, lessons learned came out of this journey for you and the team at Lego.com. Do you want to talk about a couple of those best practices and lessons learned?

Sheen: Yes, sure. So there are plenty of lessons, obviously. One area, and I think I blogged about this, is lean versus, you know, fat Lambda, and how we make the choice. And the other one is: use the service integrations where available and where possible, so that you don't need to always write Lambda functions for everything. Then obviously there are a number of other areas. Say, even when you have storage, there are ways to make a decision on where you want to keep the data or when you want to remove the data. To remove the data, there are a number of ways you can do that, so you don't need to, you know, clog things up and pay for storage: DynamoDB has TTL, CloudWatch has retention policies, so you can set the duration. A number of these things are lessons learned, because early on we didn't have certain things set, because no one knew these things were there; but then we started to learn about them, and they became best practices for us. So you learn something, and then you put it into practice. If I talk about the lean versus the fat Lambda, that again came out of our experience, especially dealing with the Lambda functions behind the checkout flow, where everything needs to be fast and crisp and quick, because otherwise customers get a poor experience. We tried a couple of options: we thought of using Step Functions, and we thought of splitting into different Lambda functions, but those approaches didn't give us the fast response that we were looking for. So that's one of the lessons, right? We need to be open; there is no right or wrong way. You choose the approach required for that particular situation. And I remember someone commented about their approach, because they have functions that complete in a few milliseconds.
If they had to split into a number of different Lambdas, they would obviously pay for, you know, 100 milliseconds for every Lambda, whereas as a single Lambda it completes in less than 100 milliseconds for them. So there is a cost implication as well. A number of these different things, as you become more familiar with serverless, as you gain more experience, you start to learn and then put into practice.
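A quick back-of-the-envelope sketch of that cost point: at the time of this conversation, Lambda billed duration in 100 ms increments, so splitting one fast function into several tiny ones multiplied the rounding. The durations below are invented for illustration.

```javascript
// Lambda billed each invocation's duration rounded up to the next
// 100 ms increment (at the time; granularity has since changed).
function billedMs(durationMs, granularityMs = 100) {
  return Math.ceil(durationMs / granularityMs) * granularityMs;
}

// One fat Lambda finishing in 80 ms bills a single 100 ms unit...
const single = billedMs(80);
// ...but the same work split across three 30 ms Lambdas bills three units,
// 300 ms total, for roughly the same compute.
const split = [30, 30, 30].reduce((sum, d) => sum + billedMs(d), 0);
console.log({ single, split });
```

The arithmetic is trivial, but it is exactly the tradeoff Sheen describes: lean-versus-fat is a per-situation decision, not a universal rule.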

Jeremy: So what about security? What lessons came out of that?

Sheen: Security is a big area, and there are different ways of looking at security. You know, alright, we talked about the IAM permissions. Initially we had serverless permissions written with sort of, you know, a wildcard. Then, as part of the code review, the PR review, these things would get caught and corrected and put right. Then we have the APIs. We have a bunch of APIs, API Gateway endpoints, and how do we secure them? There was an issue there, because we didn't have the time or the expertise to put the client-side authentication mechanism in place, so for a while we had to go with the API key-based approach, even though that's not a recommended thing. But then we have a sort of locking-down mechanism implemented in Lego, so that, you know, access gets whitelisted. So that area is still improving. Especially for the new services that we are now working with, we have proper authentication put in place from the beginning: we work with a Cognito user pool and a scope-based authentication mechanism. That is coming in slowly for all the new services, and it will then get applied to the existing APIs as well.
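The scope-based check Sheen describes boils down to something like this: Cognito access tokens carry a space-delimited `scope` claim, and the endpoint verifies the caller's token includes the scope it requires. The scope names here are hypothetical, and a real handler would verify the token's signature first (an API Gateway Cognito authorizer can do that part for you).

```javascript
// Given an already-verified, decoded access token, check whether it
// grants a required OAuth scope. Cognito puts granted scopes in a
// single space-delimited "scope" claim.
function hasScope(decodedToken, requiredScope) {
  const granted = (decodedToken.scope || '').split(' ');
  return granted.includes(requiredScope);
}

// Hypothetical claim values for illustration:
const decoded = { scope: 'shop/orders.read shop/orders.write' };
console.log(hasScope(decoded, 'shop/orders.read')); // true
console.log(hasScope(decoded, 'shop/admin'));       // false
```

Compared with shared API keys, the win is that each client is issued only the scopes it needs, so a leaked credential is bounded in what it can reach.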

Jeremy: Alright. And how did you deal with logging and tracing and monitoring?

Sheen: Yeah. So we do sort of structured logging that evolved from simple log messages. We have a decent level of logging; you know, if you look at the logs, we are now able to trace things through. Then at one point we put a monitoring system in place, so we stream the logs to Elasticsearch as well as to the monitoring system. With the structured logging in Elasticsearch, we are able to, you know, go through and try to identify any issues, and engineers work with that. But one area that we didn't focus on, or didn't put in place, was the distributed tracing side of things. That's why I think I once stated that if you're starting your serverless journey, please, you know, start with the distributed tracing. I mean, you can start with X-Ray, or bring in a third-party tool. That's a really cool thing that gives lots of confidence to the team. So that's kind of what we have in terms of logging. But again, this is an area that is always improving, including the monitoring tool that we use: they're also going to come up with enhancements that will provide us more capabilities. So that's also coming. And we spoke to a bunch of the distributed tracing tooling providers as well, so those options are still open. That's an area in which we're constantly improving.
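A minimal sketch of the structured logging Sheen describes: instead of free-form strings, each entry is a single JSON object, so Elasticsearch can index the fields and a correlation id can tie a request's entries together. The field names are assumptions, not Lego's actual schema.

```javascript
// Emit one JSON object per log entry so downstream tooling
// (Elasticsearch, a monitoring system) can index and query fields.
function logEvent(level, message, context = {}) {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context, // e.g. correlationId, service, orderId
  };
  console.log(JSON.stringify(entry));
  return entry;
}

// Hypothetical usage inside a checkout Lambda:
logEvent('info', 'order submitted', {
  correlationId: 'abc-123',
  service: 'checkout',
});
```

Carrying the same `correlationId` through every service that touches a request is the poor man's version of the distributed tracing Sheen recommends starting with; X-Ray or a third-party tool does the propagation for you.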

Jeremy: It sounds like one of the biggest lessons you learned is that you can always keep improving, right? It doesn't have to be perfect the first time around, as long as you get most of the rough edges, at least from a security standpoint, and some of those things. You know, there are a lot of improvements you can make, and it's really fast to iterate, too.

Sheen: Exactly. The point is, I mean, we say serverless and Lambda functions, but as they start growing, you soon get swamped with a bunch of things, because you don't have the time or the liberty to sit down and make everything perfect. So you identify the most important things that will let you deliver your most valued features, but then you slowly, you know, start improving on the other areas to bring everything up.

Jeremy: Absolutely. And I think that probably ties in a little bit with your concept of set piece architecture, right? Can you explain that a little bit?

Sheen: Yes. So that term, I don't know if you follow football. It's not your football, but English football…

Jeremy: Soccer!

Sheen: So there is this sort of play they call a set piece play. What they mean is that they don't play end to end to score a goal. They send someone in front of the goal post for a corner kick and things like that, and then they score a goal; that is what they call a set piece play. And I thought, OK, even in an architecture, or even building a LEGO model, the concept is there. We don't build something from one piece to the end; we kind of group things into smaller modules and build smaller pieces together. So I found that way of looking at the entire architecture, and focusing on the area we are working on, a good, progressive way of dealing with the complexity. And with the cloud and serverless, we get the opportunity to practice that. We can come up with a quick solution implementation, then we can run it through. You don't need to wait to get to production to do that, because all environments are the same; your DEV, or TEST, or PROD, they are just AWS environments, so you can basically try it a number of times, the way in sports you would in a rehearsal, that sort of thing. So you practice it again and again and make sure that that particular solution is performing, solid, and production ready. All you need to do is take it and deploy it to production. So I always say that we need to have the vision of the entire architecture, but we need to focus closely on a particular piece at a time. This is how we built the different areas and, you know, brought them together. Because obviously the services these days communicate with events, or messages, or API calls, we don't need to tightly integrate many things together. So we have them loosely coupled, or decoupled, so that we can focus and make one bit perfect before we, you know, look at the other thing, and then slowly bring them together, one by one, into the architecture.

Jeremy: Yeah, and speaking of, sort of, decoupled applications or decoupled services: event-driven and event streaming, that's something you and the team embraced quite a bit as well, right?

Sheen: Yes. Yeah. So from the beginning we started using the event-driven approach. We have a bunch of SQS, SNS, all sorts of things. But then the team is now looking into the amazing EventBridge. As you know, that is kind of changing the landscape. As soon as they announced it, we realized the value in it and the flexibility it gives, so we gave it to an engineer to come up with some sort of POC solution. So I worked with an engineer and asked her to, you know, do these things and explain the benefits that we gain: here is how we filter the message, set up the routing rule, and things like that. And for the new services, as we move forward, we are having EventBridge as a core component, event buses as a core component, from the start, because, as you know, the filtering capabilities are far, far better compared to... Yeah, yes.
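The filtering Sheen mentions is done with EventBridge rule patterns: a rule declares, per field, a list of acceptable values, and only matching events are routed to the target. As a toy illustration of the idea (this is a simplified stand-in, not the real matching engine, which also supports prefix, numeric, and anything-but operators; the event source and field names are invented):

```python
def matches(pattern, event):
    """Simplified EventBridge-style matching: every key in the
    pattern must exist in the event, and the event's value must be
    one of the listed values; nested dicts recurse."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not isinstance(event[key], dict) or not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True


# A rule pattern: route only order events that were placed or paid.
pattern = {
    "source": ["com.shop.orders"],
    "detail": {"status": ["PLACED", "PAID"]},
}

event = {"source": "com.shop.orders", "detail": {"status": "PAID", "total": 42}}
print(matches(pattern, event))  # -> True
```

Because the bus does this filtering, each consumer subscribes to exactly the events it cares about, rather than every consumer receiving everything and filtering in a Lambda.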

Jeremy: And you can trigger more services with it.

Sheen: Exactly, exactly. So, yeah, in my patterns talk we talk about EventBridge as well, because, as I already said, it's a cool, cool service that we have.

Jeremy: Yeah, yeah, I love EventBridge. I just started using it in a real project; my latest project, which I've been working on for the last couple of months, incorporates it. And of course they just added CloudFormation support for creating custom buses, which is good, because this project isn't live yet, so I could go back and fix the workarounds I had to do to get that working. So that's there now. All right, awesome. So, listen, this has, I think, been super educational for anybody who is thinking about bringing serverless into their organization. But while I have you, I do want to talk to you about a post that you just put up, called "Don't wait for functionless. Write less functions instead." So just take a couple minutes. I'd love to hear your perspective on this concept of functionless.

Sheen: Functionless, as a term, I heard probably at one of the conferences. Probably in Helsinki; they had a panel discussion at the end, and someone was asking what's next for serverless, and someone said, "Oh, it's functionless." Obviously, at that stage functionless had already been used by a few people. So then I started to think about it. Around that time I was speaking to my friend, whose organization was moving to serverless, and he was explaining a simple piece of architecture he had put in place. And there was this Lambda function kind of between an API Gateway and an SQS queue. And he said he had set up this data transporter, and I asked him what exactly it performs. He said, "Oh, no, it just takes the request payload and puts it into the queue." And I said, "You don't need a Lambda there." I mean, he became uncomfortable, but then I realized that that is the situation many engineers go through: especially when they come new to serverless, they start writing Lambda functions. So that thought process evolved, and I realized that there are a number of ways we can reduce the use of Lambda functions. API Gateway being one; then, obviously, we talked about the approach of fat Lambdas versus slim Lambdas; and there are a number of other areas, now with EventBridge as well, where we can avoid having custom Lambda functions written. So, you know, when we explain this to people, they may say, that's just another Lambda function, why are we making so much noise? But the thing is, if you look down, you avoid so much hassle: maintenance, you know, security issues, all the integration points and pains, and of course, depending on the usage of that function, there is a cost implication as well. So that's roughly how I came up with this idea, and I thought, well, okay, let me collectively put everything out there as a nice, you know, light-hearted read, so that I'm not hitting so hard, but just conveying the message, so that someone reading it will have something to think about.
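The "data transporter" Lambda in Sheen's story can be removed with an API Gateway service integration that writes the request body straight to SQS. A sketch of the common pattern in CloudFormation (the resource names, queue name, and IAM role here are invented for illustration, not taken from anyone's real stack):

```yaml
# Hypothetical CloudFormation fragment: POST /orders goes directly to SQS.
OrderIntakeMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    RestApiId: !Ref Api
    ResourceId: !Ref OrdersResource
    HttpMethod: POST
    AuthorizationType: NONE
    Integration:
      Type: AWS
      IntegrationHttpMethod: POST
      # Role that allows API Gateway to call sqs:SendMessage on the queue.
      Credentials: !GetAtt ApiToSqsRole.Arn
      Uri: !Sub arn:aws:apigateway:${AWS::Region}:sqs:path/${AWS::AccountId}/order-queue
      RequestParameters:
        integration.request.header.Content-Type: "'application/x-www-form-urlencoded'"
      RequestTemplates:
        # VTL mapping template: wrap the raw body as an SQS SendMessage call.
        application/json: Action=SendMessage&MessageBody=$util.urlEncode($input.body)
      IntegrationResponses:
        - StatusCode: "200"
    MethodResponses:
      - StatusCode: "200"
```

There is no function to patch, secure, or pay for per invocation; the trade-off is that the transformation logic now lives in a VTL mapping template rather than in code.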

Jeremy: Sure. You know, and I love this idea, because I think things like API Gateway service integrations, they're not a silver bullet. I mean, obviously, there are a lot of things you still need Lambda functions for. If you have business logic, then yes, Lambda functions are needed. But sometimes you're just moving data around, or even doing some transformations, and there are ways to do that without even touching Lambda. And it reduces a ton of complexity, right? It makes your architecture a little bit simpler. I do find that when you start doing some of these service integrations, it becomes a little bit more black box, and the observability isn't quite there, you know, so that can get a little bit nerve-racking. So I know, for me, I love using Lambda because I can log everything; I know exactly what's happening. But at the same time, there's a lot of reliability and retries and resiliency built into the cloud for you already, you know, and the more you lean on that, the less sort of technical debt and overhead you have to worry about. So...

Sheen: Yeah, exactly. So there are two things. One is, uh, you don't need to worry about, say, cold starts, which is a good thing. But then you don't know exactly what happens inside, and that's an area where AWS needs improvement. Because the Velocity templates, you know, the struggle of getting them right and making them work, that's what needs, you know, drastic improvement. So yeah, that's the other side. But if you're just shifting data and simple things from, you know, one place to another, and there is a way of not using Lambda functions, then please don't.

Jeremy: Awesome. All right, well, I'm going to end this with something that Forrest Brazeal said at Serverlessconf, which I thought was quite clever. He said, "Most people have been saying that serverless is LEGO, and now LEGO is serverless," which I think is brilliant, so good on him for that. But again, Sheen, thank you so much for being here and, you know, going all over the place telling this story and just sharing your knowledge with the serverless community. If people want to find out more about you and what's going on with LEGO.com, how do they do that?

Sheen: So obviously, I'm on Twitter, @sheenbrisals. Also, we have the LEGO Engineering blog on Medium, where I'm trying to encourage engineers to put more out there. That's another place where I share the journey experience, and there are other engineers now picking up and spreading the knowledge around as well. So those are the ways we can communicate. And, obviously, you know, the tech community is growing, so there are multiple ways we can get in touch and learn from each other's experiences.

Jeremy: Awesome. And I'm sure you'll be speaking at a conference near someone soon. So, again, awesome. I will get all this into the show notes. Thanks again, Sheen.

Sheen: Thanks a lot, Jeremy. Pleasure to talk to you.

What is Serverless Chats?

Serverless Chats is a podcast that geeks out on everything serverless. Join Jeremy Daly and Rebecca Marshburn as they chat with a special guest each week.