Jeremy chats with Gillian Armstrong and Mark McCann about Liberty Mutual's strategy for serverless adoption, how they evangelized serverless and focused on developer enablement, and some of the successful serverless projects they've launched.
Gillian works as a Solutions Architect at Liberty Information Technologies. Her team is focused on thinking about big problems, and working out how to solve them using innovative technology in interesting new ways. At the moment she is working on Artificial Intelligence, with a particular focus on Conversational AI design and development. She has more than a decade’s worth of experience in many technologies across the full stack, and loves being a software engineer as it allows her not just to think up big ideas, but also to make them a reality.
Mark is an Architect at Liberty Information Technology who has been developing software and solutions for Liberty Mutual for nearly 20 years. He is currently working on making "Business idea to production in minutes" a reality. Mark holds several AWS Cloud Certifications and has a vast amount of experience with microservices, event-driven architecture, Docker, AWS, and other emerging cloud technologies.
- Gillian Twitter: @virtualgill
- Gillian Web: virtualgill.io
- Mark Twitter: @markmccann
- Liberty IT: liberty-it.co.uk
Jeremy: Hi, everyone. I'm Jeremy Daly, and you're listening to Serverless Chats. This week, I'm chatting with Gillian Armstrong and Mark McCann. Hi, Gillian and Mark. Thanks for being here.
Gillian: Hi. Thanks for having us.
Jeremy: So both of you work on the team at Liberty Information Technology, which is a part of Liberty Mutual Group. So Liberty Mutual, if people don't know, is a 100-year-old insurance company, one of the largest here in the US - you operate in, what, 30 countries, with 50,000 employees, something crazy like that. So let's start with Gillian. You’re a Solutions Architect there. Why don't you give us your background, a little bit more about what you do?
Gillian: Sure. So I have worked across a lot of the areas in Liberty, including our emerging tech space, where I was first able to work on some completely serverless-first projects. This year, I’ve been working with the teams in our digital ecommerce space in Boston, looking at serverless, looking at AI, and I've just moved to our Data and Analytics unit. I definitely have a big focus on driving the serverless mindset in the company, and also on trying to get involved in the serverless community as well. And I'm also looking at AI from an engineering and serverless perspective. So how far can we get using the managed services? How do we bring it into large enterprise systems? Because, as you said, Liberty Mutual is a huge company.
Jeremy: Great. All right, Mark, what about you? You're an Architect there. Why don't you tell us about yourself?
Gillian: Yes, so Mark and I work in very different areas, but we try to be really collaborative across the company on some of these bigger things like serverless.
Jeremy: Well, Mark, it's good to be talking to another developer old-timer like myself. I appreciate that. Alright, so let's start, because again, Liberty Mutual is huge and you are actually part of Liberty Information Technology. So these are separate companies and I'm fascinated by how large organizations work and how all things are distributed and stuff like that. So maybe one of you can explain to me and to the listeners, what's the relationship between Liberty Information Technology and Liberty Mutual?
Gillian: Yes. So we did say Liberty Mutual has about 50,000 employees. About 4,000 of those are in IT. And the company we work for, Liberty IT, is a wholly-owned subsidiary of Liberty Mutual. We have about 600 software engineers based between Belfast in Northern Ireland and Dublin in Ireland, and they are all fully focused on delivering world-class software and solutions for Liberty Mutual.
Mark: Yeah, so we're very much a software house, focused on high-performance engineering and really delivering those world-class solutions that Gillian mentioned. So we're slightly different from the rest of the Liberty Mutual sort of area, where they may have a mixture of developers and business. We're very focused on software engineering.
Jeremy: Awesome. Alright, so that makes a lot of sense. Thank you. Alright, so I want to talk to you today because you both mentioned a lot about serverless. Liberty Mutual is obviously embracing serverless. So let's talk about that: how is Liberty Mutual embracing serverless? And maybe let's start with how the team discovered serverless. What was the point where you said, “Hey, let's start looking into this new technology?”
Mark: Yeah, I think that goes back to 2014, when we started our public cloud journey. That's when the public cloud team formed, and they started opening up access to AWS and seeing how this whole new cloud thing would work within a big enterprise. A lot of that start in 2014 into 2015 was really just dipping our toes in the water and exploring cloud capabilities; we had very little workload in there. Coming into 2015, it was getting the initial learning, starting to set up the pathways to get to the cloud, trying to build in those capabilities that a big enterprise like ours needs. So a real focus on security, a real focus on how the development teams actually get access to this stuff. You know, what are good practices? And again, working with AWS and partnering with them to figure out what that looks like. So our public cloud team did a really great job in starting to explore the space and opening up for the enterprise this new awesome capability that’s the cloud. Then into 2015 or 2016, we were starting to really think about what apps we could migrate to the cloud. What modernization can we do? What sort of approaches could we take to break down our big monolithic applications into things that will actually fit in the cloud? And then all the way through to 2016, we had maybe 10.5 percent of our workloads in the cloud. 2016-17 we’re in the 12.5%. Then 2018 we’re at 20%, and now we’re up to about 30% of our workloads, plus, in the cloud. All through that time it’s been about developing the capabilities, developing the expertise, partnering with AWS, and really learning what the cloud capabilities are. A lot of this was traditional EC2, RDS-type work, moving the workloads into that. Then we get into containers, we get into the whole DevOps practices, and now, ultimately, we're starting to really get after serverless.
Jeremy: Okay, Awesome. Alright, so then let's talk about just your strategy for adoption, right? And we can probably go a little bit deeper into the timeline as we get further into the conversation. But let's talk about the strategy. What was your strategy for adoption?
Mark: Yes. So our strategy, really, at the minute, is to create an environment for serverless to succeed. We mentioned the work that our public cloud team has done, and that’s really been about building the CI/CD pipelines, building up the developer access, making sure that as a new developer joining a team, there's nothing blocking me from experimenting with the cloud, delivering some innovative solutions in the sandbox environment, and then ultimately promoting that through all the different stages to production. So a lot of the work has been done by the people who have come before us — Gillian Armstrong and Gillian McCann in particular have really helped accelerate a lot of the public cloud capabilities that we have. A lot of that is now in place. So what we've been trying to do now is evangelize what a serverless-first approach looks like: educating our leaders, educating our developers, so that whenever they're looking at any potential problem that we or our customers have, they are trying to adopt that serverless-first approach, seeing if a serverless-first approach works for that particular problem, and seeing where it’s a good fit.
Jeremy: So is that something now where — you said the serverless-first sort of thing — so everything you build now, you just do serverless?
Mark: Not necessarily…
Mark: ...as we much as we would like to say yes…
Gillian: I wish.
Mark: We want them to have a serverless-first mindset and a serverless-first approach, but we know that it's not a good fit for all contexts just yet. So we have a number of fallback options on that serverless spectrum that we want our teams to fall back to, but ultimately, we want them to at least try to see if the serverless-first approach works. Does it work full-on with Lambdas and managed services? Is that a good fit? Does it give you the solution you need? If it doesn't, then let’s fall back onto some sort of container solution potentially, or maybe some sort of PaaS solution. And ultimately, if you fall back far enough, you might end up back on prem. But then we hope that you're really trying to aim for that full-on serverless experience.
Jeremy: It sounds like Gillian doesn't agree with you 100%. So Gillian, let's get your take on this.
Gillian: I think we find that anything we build fully greenfield, we really have been able to go completely serverless, even with huge enterprise systems — I know we’ll talk more about that. I think people feel like serverless is just a little thing, that it's great for playing about with small applications, but we've really been able to prove that we can build very sophisticated enterprise systems. The challenges do come when you are dealing with legacy applications in your 100-year-old company. You do tend to have a lot of stuff left over from back in the day. You know, I remember when, if you needed a server, someone maybe had to drive it on a truck to the data center, and you had to wait until that happened.
Jeremy: Those were the days.
Mark: Well, because we're evangelizing this serverless-first approach, there are some challenges to that way of delivering solutions. So our strategy is to identify what those challenges are ahead of time, trying to make sure that the managed services that we want our teams to adopt are actually available to them. They have been through our security assessments. They have been added to our allow list of services that we can use. So if you want to use something like AppSync, it's been pre-approved, it's been enabled in our accounts, and it's available to the development teams all the way to production. So not only are we pushing the teams to adopt this serverless-first approach, we really want to make sure that the serverless capabilities we would like them to use are actually available to them. We've been working very closely with our public cloud team and our DevOps teams, but also with security, legal, and privacy teams, to make sure that the services we are asking our teams to at least evaluate are actually secure and cost-optimized, and that any sort of legal or privacy concerns have been mitigated. So we've been trying to put a lot of those things in place to make sure that it's not just your architects telling you to go serverless-first and then not enabling any of the teams to actually do it. We're trying to make sure that those blockers have been removed on the pathway to serverless.
Jeremy: The labyrinth that is the giant enterprise, right? So how did you get going with serverless? I know you talked a lot about the cloud, you have a cloud team and things like that. But is this something where it was sort of a ground-up approach, where the developers are bringing it in? Or is this something that came down from the top?
Gillian: Yeah. So we do have a cloud team, and they manage our cloud accounts. As Mark said, there's a lot of enablement that needs to be put in place to make sure that as developers move from on-prem to the public cloud, there's good governance, and we're not going in and opening all the ports to the whole world. So they very much put those structures in place. But then it was the teams that came along and actually built applications that started to prove that it really could be used. So I was on one of those teams, with Mark's wife, Gillian McCann actually, and we built out an employee digital assistant, and it was something that definitely brought a lot of learnings. Where were the problems with security? Where were the problems with observability? How do we audit what's going on? And then really from that, and from some other projects that were going on, we were able to start to really learn: can we use this in-house? What are the blockers? What are the barriers? And we could go back to the public cloud team and get them to support us, get them to help us, start building out CI/CD pipelines. I know Mark’s been doing a lot with developer enablement and helping other developers, so that everybody doesn't have to solve the same problem over and over again, because serverless is supposed to make it easier.
Jeremy: Supposed to.
Gillian: Supposed to, yes.
Jeremy: Yeah. So what about leadership? Because especially in a multi-tiered organization, where does the influence come from? Are you working with the leadership to try to get them to integrate that into their vision, or what's happening there?
Mark: Yeah, absolutely. Part of the job is to articulate the benefits of serverless and the advantages that come with it: talking to your leaders, talking to your senior leaders, and making sure that they understand the value proposition of a serverless approach. You know, it really is the ultimate cloud capability that we want to pursue. We want to focus our developer efforts on delivering business value, so we're really trying to articulate that we don't want to deal with undifferentiated heavy lifting; we want to focus on really solving real customers’ problems. And because we are a big insurance company, that means real people are ultimately the users of our software capabilities. So we're very acutely aware that we don’t want to waste time, money, or effort on stuff that doesn't actually have an impact on our customers. Ultimately, that's our mission here as an insurance company: to really, actually help people.
Gillian: It's been great because we know our CIOs have publicly stood up and told the company that this is a new approach that they recommend. This is the go forward. So it is great to see the message coming down from the top as well as coming from the grassroots and then the developers as well.
Mark: And recently, with the senior leaders in some of the spaces, adopting a serverless-first approach is now in some of their OKRs and some of the objectives that we're rolling out across the thousands of developers that we have in some of our divisions. So it's now a stated objective of our teams that we need to evolve to meet this serverless challenge.
Jeremy: That's awesome. Alright, so now you mentioned AppSync is one of the things you kind of approved. So you must be using a lot of managed services though, right?
Mark: Yeah, absolutely. Across the board, we're using pretty much everything. So depending on the team and the problem at hand, you know, we have everything from the typical Lambda, DynamoDB, the Kinesis stuff...
Gillian: API Gateway, SQS, SNS.
Mark: Yeah, AppSync more recently, as we’re starting to do more GraphQL, evolving from full-on RESTful services to much more GraphQL-type services. And yeah, we use CloudFront. Everything, really. We have access to the full portfolio of serverless capabilities pretty much across the board. There are some enterprise decisions that were made, though. Route 53, for example, we don't have access to; we have our own DNS solution. So those become some of the challenges that we have when we start evangelizing serverless. Developers go to talks by people like yourself, or look at some of the patterns that are out there, and they see these solutions and try to bring them in-house. But maybe they don't work quite the way that you would expect, because some of the capabilities aren't allowed for us, for good reasons. So something we're working on a lot is to take those patterns and make them work for our ecosystem, in our context: instead of this DNS provider, we’re using this one, and you update the pattern, or update the templates, or update the CloudFormation to work within the ecosystem that we have. So a lot of that is around identifying blockers and barriers and making sure that we've got a solution in place for them.
Jeremy: So okay, I don't know if this is true, but my understanding is that if you're a technology company in the UK, you have to use Wardley mapping now. Is that true with you guys as well?
Mark: 100%. We use it a lot for our strategy, and most recently I've been using it to talk to the teams that I'm sort of responsible for, to really understand what the team's purpose is, who their users are, and what they're delivering to their users, but also to get a real understanding of their tech stack. And I'm using Wardley mapping to see how they could evolve their tech stacks to meet these serverless goals that we have as a company. So it really helps me talk to the teams, it helps me actually show the teams this is where we're heading, and it also gives us fast feedback on any sort of blockers that they may have to serverless adoption.
Jeremy: Awesome. Alright, so let's go back to the timeline. So you mentioned 2014, public cloud. Somewhere around, I think you said 2016, you started breaking down some of the monoliths. So maybe we start after that — after you broke down the monoliths and you started moving into actually bringing things to the cloud. Let's start there.
Mark: Yeah, so the digital assistant work sort of kicked off around 2016 as well — the work that Gillian and Gillian McCann were doing. We had a lot of the security teams starting to build out serverless capabilities then as well, some of the auto-remediation stuff, and we can talk about those because they’ve talked publicly at re:Invent and elsewhere about some of these capabilities. They built a lot of really awesome security capabilities in a serverless way. It provides guardrails for the development teams and makes it so that we can't do things that are against our company policy, but it frees us up to go off and try and experiment with stuff. It'll give you nice feedback: don't open these ports, or you can't use this particular capability, or everything has to be encrypted at rest and encrypted in transit. So again, part of that developer enablement is enabling them, but putting guardrails around some of the things so that we're not exposing ourselves to risk.
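As an illustrative sketch (not one of Liberty's actual guardrails), the "encrypted at rest and encrypted in transit" policy Mark describes can be enforced declaratively in CloudFormation; the bucket and statement names here are invented for the example:

```yaml
# Hypothetical CloudFormation fragment: enforce encryption at rest
# (default server-side encryption) and in transit (deny non-TLS
# requests) on an S3 bucket.
Resources:
  AuditedBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
  AuditedBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref AuditedBucket
      PolicyDocument:
        Statement:
          - Sid: DenyInsecureTransport
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource:
              - !GetAtt AuditedBucket.Arn
              - !Sub "${AuditedBucket.Arn}/*"
            Condition:
              Bool:
                "aws:SecureTransport": "false"
```

An auto-remediation function of the kind Mark mentions would then watch for resources missing settings like these and flag or fix them automatically.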
Jeremy: So you started — a lot of the serverless stuff you were doing originally was internal, like, sort of compliance, that sort of stuff, right?
Gillian: Yeah. So some of the first things were utilities for the public cloud team themselves; they were definitely the experts at the time. And then we really moved from that to other internal systems. So we build a lot of employee-focused software, and when we build internally for ourselves, we do have the opportunity to use a lot more emerging technologies, to take a little bit more risk, to experiment, by employees, for employees. So we have an internal productivity tool that we're actually selling externally now to other customers. We called it MyHub. On top of that, we built the digital assistant. It was a chatbot, and we built it fully serverless. Alongside that we had a few other small tools being built out internally, and that let us work with those security teams and the public cloud team; it exposed where we were having issues and let us negotiate. Because, you know, we have to go forward. This is the future. So if we sit down with the legal team and security team, it's not "can we do this?" It's "we have to get to the cloud, we have to be able to use these tools, this is the future, so let's work together to work out how we can do it." And alongside that, working out patterns, working out best practices, if there is such a thing as a best practice, and creating resources for other people: here's the CI/CD pipeline for how you can deploy this, here is a custom resource you can use yourself. And then as we moved on and hit the customer-facing applications, they were able to move really, really quickly, because a lot of barriers had been removed for them.
Jeremy: So before you did that, though, in terms of the actual like customer-facing workloads were you — I think you mentioned something about containers — were you still sort of going down the container route initially?
Mark: Yeah, absolutely. I think 2015, 2016 into 2017, we were very much still on that move to microservices, so we did a lot of work talking to the teams, doing things like event storming to really break down the monoliths into bounded contexts and microservices, and building out the containerized solutions. Whether it was Pivotal Cloud Foundry or Docker Datacenter, we had these cloud-ready, cloud-native, sort of containerized environments. So we were doing a lot of work there to break down the monoliths into microservices, move them onto containers, and get them right onto the cloud. A lot of the work we did around 2016 and 2017 was really around moving those workloads, as well as the work that Gillian and others were doing around the serverless stuff. So a large bulk of the work was really moving to that containerized world, and that gave us a huge amount of advantages and really accelerated a lot of our cloud adoption and our time to market. And like Gillian mentioned, we were doing a lot of education, building a lot of patterns and a lot of the good practices: what does an event-driven architecture look like? What do microservices look like? What does that mean for our teams? How do we educate our teams? How do we train them? How do we help guide them on some of this stuff? And ultimately, look, we've had a lot of success with that, and it's helped really push a lot of our workload onto the cloud. Now we’re into the next phase of evolution: how can we move on to the serverless-first approach? Containers will still play a large part of our future, I think, for a while, but ultimately we want to evolve and move everything to that serverless ecosystem for all the advantages that serverless brings.
Gillian: I managed to somehow miss containers and skip past them. No one’s going to drag me back. I think it is interesting. As Mark said, obviously, we have a lot of applications we couldn't get out to the cloud quickly if we didn't use containers. So in that way, it is a step between, you know, going cloud native, going serverless. But realistically, containers do not get you closer to serverless. They just get you onto the cloud. So I always say to teams, “Look, if you've not gone to containers yet, see if you can just skip past them. Don't spend the time putting things into containers if it is possible to go to serverless.”
Jeremy: Yeah, I totally agree. I mean, the problem that, you know, I always see is that you can't lift and shift to serverless. It has to be a rewrite, and a lot of teams just don't have the time to do that. So I have nothing against containers. I think containers are great, especially when you're breaking things into microservices. But I totally agree. If you can skip them and go right to serverless, that is my preferred approach as well.
Mark: Yeah, 100% agree. I think having those fallback options as well is good and making sure that people know that the problem they're trying to solve and the context that they're within, and that’s the whole rationale behind the serverless-first sort of mindset we're trying to push. Try to do this in a serverless-first way and we think it will solve, it'll be suitable for a lot of the problems we're trying to solve, but if it's not a good fit, we have plenty of other options.
Jeremy: How much of your workload is in the cloud now?
Mark: So I think from 2017 we were at 12.5%, and in 2018 we were 20%-ish. Now we’re probably in the high 30s percent, and that's always increasing. So there's a lot of acceleration there in the cloud. We're really starting to accelerate that move and get everything into the cloud properly.
Gillian: Yeah, there are tens of thousands of servers, you know, sitting in our internal data centers. So it's amazing that we've already got to almost 40%. Hopefully we're going to keep going, keep accelerating, but I know that with the last ones in there, there’ll be dragons.
Mark: But for anything new that you do, we’re trying to make it so that teams build serverless-first, and all the blockers on the pathway to serverless adoption have been removed.
Jeremy: I can imagine with an enterprise like Liberty Mutual, you probably still have some servers that are actually writing onto stone in order to save information. So I've talked to a couple of development teams and a few development team leaders — relatively small teams, maybe five to 10 developers that are trying to get their teams to move to serverless. You have 600 developers, you said, that work for Liberty IT. So let's talk about how you get the word out, right? How you train people, how you evangelize serverless internally, how you enable those developers. Let's kind of start there. What do you guys do to get the word out?
Gillian: Yeah, so the first thing — we say not everybody's a cloud developer yet. So the very first thing we needed to do was really educate people about the cloud and the benefits of the cloud. You can't get to serverless if you don't know how to build on the cloud. So there has been, for several years now, a lot of support put in place to let people learn about the cloud, to get their certifications, and to get time to do that. And when, a couple of years ago, we started thinking about serverless, initially we were trying to share internally: we have internal systems where you can blog, we have internal tech talks, so we spent a lot of time talking about what we were doing with different groups. But ultimately, the big tipping point came when myself and Gillian McCann got a spot at re:Invent and went and talked about a solution we were building. Because sometimes being able to go external, and show that your expertise is on par with other people in the industry, is actually a really great way to get people internally to listen to what you're doing, to pay attention, to actually find out about what's happening. Since then, we have had a lot of people out at ServerlessDays, Serverlessconf, QCon, and sometimes that's actually more effective, because people will go and listen to the recorded talk and then maybe come and talk to you internally. We've also been doing a lot of things where we get AWS or Google to come in and run workshops or give talks, which gets people away from their desks for a day doing something. And then we're also getting teams to do informal hackathons and engineering days, letting them try things out, which really leads into that developer enablement, because that's part of, you know, enabling them.
Mark: Yeah. And even on top of that, we've had cloud native open spaces, where we get the whole company together just to talk in a very open way about cloud adoption and the challenges and the successes that teams have had, and just really get developers to talk to each other. Because we're a large company, lots of people have had good experiences or challenges with certain things, and just getting developers to talk to one another has been a great enabler for this adoption we're trying to get to. So we're a very collaborative, engaging sort of company, and that's in our culture, so we really want our engineers to share and really try to help each other.
Jeremy: So how do you — you mentioned things like repeatable patterns and different models that people can use — how do you share those? Do you just have sort of internal wikis and things like that? Is that where that stuff goes?
Mark: Yeah, we've had a number of big initiatives around enabling these cloud native approaches. So within GRM, which is another customer-facing insurance area, we've had a DNA, a digitally native architecture, approach. We built a team that took all these patterns and made them executable code for developers, so they could go to a dashboard, click a few buttons, and they would have a fully cloud native solution deployed with CI/CD pipelines, with all the security checks and balances, with all the quality baked in. That would enable you to deliver a new cloud native API all the way to production in a few minutes. And similarly, in the area I'm in now, we're doing the exact same thing, but with the serverless-first patterns. So we're baking the patterns into templates that are then executable by developers, and really making sure that all of our good practices and security standards and quality are baked into those patterns. So with a few clicks of a button, the developers have a full-on serverless solution with the pathway to production all automated and ready to go. The aim is to capture all the good stuff that Gillian, Gillian McCann, and others within the company have done; we want to bake those good practices into these repeatable, executable templates. And that's why something like the CDK coming out has really piqued our interest. We want to try to capture some of our infrastructure good practices and make them CDK constructs, so that again we can accelerate that developer enablement, and these well-proven, hardened, secured, compliant capabilities are available for all our development teams.
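To make the "serverless pattern as an executable template" idea concrete, here's a minimal, hypothetical sketch (not one of Liberty's actual templates) using AWS SAM: one function behind an API, with the resource names and handler path invented for illustration. A pattern generator of the kind Mark describes would stamp something like this out with pipeline config and security defaults already filled in:

```yaml
# Hypothetical AWS SAM template: a minimal "serverless API" pattern.
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  QuoteApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs18.x
      Handler: src/handler.quote   # invented handler path
      CodeUri: ./
      Timeout: 10
      Tracing: Active              # tracing on by default as a baked-in good practice
      Events:
        GetQuote:
          Type: Api
          Properties:
            Path: /quote
            Method: get
```

Because the whole stack is declared in one file, the "few clicks of a button" experience reduces to generating this template and running the deploy step of a shared CI/CD pipeline.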
Gillian: So we have “inner source” as well, sort of like open source, but internal. We have repositories where people can share things; people are always big fans of GitBooks; and there are Slack channels where people share links to different things about serverless, either external or internal.
Mark: But ultimately it's that serverless-first approach. We want to remove any undifferentiated heavy lifting, so we want to package up and automate as much as possible. So that if a developer needs a full-on serverless stack, they shouldn't have to go and craft it themselves. There should be good stuff available to them, to accelerate, so that they can truly focus on business value.
Jeremy: So what about giving people time to experiment? Because even whenever I come across a new service, or a service that I might have been using but now want to do something new with, codify it with CloudFormation and automate some of the processes and things like that — I mean, that just takes time. I could spend an entire day just, you know, messing around with a CloudFormation template sometimes. Do you give teams the ability to do, you know, sort of experimentation on their own?
Mark: Absolutely. Because we are a software engineering sort of company within LIT, and we have a culture of engineering excellence, we need to give teams capacity to learn, to explore, to play with some of these new capabilities. So a lot of our teams have dedicated innovation time that allows them to explore and experiment with some of this stuff, and it doesn't even necessarily need to be aligned to a business goal. Can we just explore this new technology because it’s cool? We've encouraged all of our teams to have at least 20% time, pretty much, to explore new capabilities and learn and read and do the right thing for their teams. With the pace of change and the pace of new capabilities coming out, if you don't have that capacity for teams to explore new technology, you quickly get left behind.
Gillian: Yeah, I think also we try to educate product owners and other groups outside of IT that when teams are bringing in new technologies that they're actually using for features on their projects, things may take a little bit longer, but ultimately, you know, they will go faster. And rather than just demanding that they keep accruing tech debt by, you know, repeating everything that's ever been done, they let them, you know, have a little bit of time to bring in new things.
Jeremy: And is that something, I mean, I can imagine if you've got 600 people, or however many people are working on different serverless projects, they've got to be discovering new things, better ways to do it. So even if you've codified a pattern or whatever, and they say, “You know what, I found a better way to do that,” it sounds very grassroots, like it just kind of works its way back up through the system?
Mark: Yeah, yeah, pretty much. And we have a lot of vehicles for people to share. There are internal wikis; there are internal collaboration platforms that we have. Pretty much all the teams would have some sort of tech talks on a regular schedule, sort of show-and-tell-type time. So if one team has found a new way to do something, or say a new capability has been released on AWS, like on-demand for DynamoDB, you know, on-demand capacity that will really reduce the cost of your DynamoDB tables: if a team has already turned it on and figured out how the CloudFormation gets updated to enable that, they may talk to another team from the other side of the building and say, “We did this crazy thing and it saved us hundreds of dollars a day; you might want to do that.” So we have a very collaborative culture within LIT, and if somebody does something cool, we very quickly hear about it.
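The DynamoDB on-demand switch Mark mentions really is a small template change. A minimal illustrative CloudFormation sketch (table and attribute names are hypothetical, not from the conversation):

```yaml
Resources:
  OrdersTable:                         # hypothetical table name
    Type: AWS::DynamoDB::Table
    Properties:
      # PAY_PER_REQUEST is on-demand mode: you pay per read/write
      # instead of paying for provisioned capacity around the clock.
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: orderId
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH
```

Flipping `BillingMode` on an existing table is exactly the kind of one-property update one team can figure out once and then share across the building.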
Gillian: Very. The absolute joy of a serverless architecture is that it's evolving. Because it isn't this big upfront design where an architect goes and creates a massive big architecture diagram and then prints it out and puts it on the wall, and that's it for the rest of time. Because it is evolving; because it is easy to take pieces in and out and change them. That lets your team, you know, contribute to the architecture. It lets individuals who maybe have a bit more expertise come along and say, well, we should change this. If new patterns are emerging, you can go back and update the architecture with a lot less overhead than it would have had in the past. Maybe not with none, but certainly the joy is that it is a much more flexible and changeable way of building something. And definitely in the serverless world, in the serverless community, there are debates about the best way to do things. And I think people are coming up with new and better ways as new tools are coming out, as new functionality is coming out. And I know I've been to a lot of conferences and listened to a talk, and it's just completely blown my mind. And I've gone, “Oh, I've just been doing this thing completely wrong.” So I think it's pretty exciting. One of the joys of working in any emerging technology is you don't have to worry so much about making the wrong decisions, because you definitely will have made wrong decisions, and you just have to accept that you're just going to have to keep changing and evolving.
Jeremy: You'll always find, too, that it's something like: you build something really cool, you find a great workaround to do something, and then, like, two months later, Amazon releases a way to do it in one line of code, which is great. I mean, that's actually one of the great things about just where we are in the serverless space right now: as we start to find use cases and people actually start to use it, that's when you start butting up against those limitations, and AWS and Azure and Google are all working to get rid of those, which again is just amazing.
Gillian: I honestly think that they lie in wait, and will wait and wait. Maybe a new feature will come out, and we'll spend, you know, four or five weeks building it ourselves, chugging along. And then the day you check it in, it's done. You'll wake up the next morning, you'll look at Twitter, and there'll be a blog post dropped, saying, “Hey! That feature's there.” And you'll say, “No!”
Mark: And again, I think our teams are well aware that any sort of custom workarounds that they're building may have to evolve to use whatever the managed service delivers, because, you know, you should just be prepared to throw this stuff away. Because, you know, sooner or later, somebody will bake it into the ecosystem you're working within, and then, you know, that custom-built thing is no longer relevant.
Jeremy: Still worth it, though. No, still worth building those things. Alright, so you mentioned a couple of internal tools and stuff like that. But you guys have built a ton with serverless. So let's talk about some of the success stories. I think that'll be interesting to people, especially in a large organization. So why don't we start with some of those internal tools again, like the employee digital assistant? What's that about?
Gillian: Yes, so the employee digital assistant. It's a chatbot. Very exciting. This is a little bit where I get to dabble in AI, and applied AI in that sort of serverless mindset where we pick up those managed services. And it was really based around making things easier for employees. So nobody wants to spend all their time searching for things on SharePoint, or emailing someone for an answer who doesn't get back to you for a week. So what we did is we hooked up a whole pile of internal functions. We hooked up our internal help desk, finance, HR, and got them to put in a lot of different things. You just go and ask the chatbot whatever you want, you know. So, “Something's going wrong with my payslip this month. What do I do?” Or everybody's favorite one, “What's on the cafe menu today?” Which was the top search from the intranet. So because we were building it completely from scratch, it was this amazing opportunity to build it completely serverless to try things out, and we were able to experiment a lot. And one of the most exciting things is, from that, we now have a whole spinoff company called WorkGrid, a startup that has spun out from Liberty Mutual, that has taken that, rebuilt it as a SaaS solution, and is now selling it to other companies.
Jeremy: Awesome. And you also have something called Radar. That's your cloud adoption one?
Mark: Yeah. So we can talk about this because we've talked about it at re:Invent, so it's cool. Our public cloud team created a security auto-remediation tool that really helps to prevent any sort of bad behavior from developers when they're creating their resources or playing in the cloud, doing stuff that we shouldn't be doing. So it has a number of security policies as code that will prevent us from creating new resources with open ports to the world, or not encrypting at rest, or not encrypting in transit. And it gives you nice reports and it gives you feedback on where you're going wrong, where you're not following company policy. But it also auto-remediates as well: it's triggered off CloudWatch Events and will actually rectify and stop you in some cases. But really, it's an awesome enabling constraint, because it means that our good enterprise practices are baked into an automated policy. So that security policy as code really helps keep us on the straight and narrow, and I think it really shows it's not about security being the Department of “No.” It enables us to go faster, but go faster with good practice and good security.
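The “security policy as code” idea Mark describes can be sketched as a small rule function plus a remediation step. This is an illustrative sketch only, not Liberty's actual tool; the rule dictionaries are a simplified stand-in for the payload a CloudWatch Events rule would deliver, and a real remediator would call the cloud API rather than filter a list:

```python
# Illustrative policy-as-code check: flag security group ingress rules
# that open a port to the whole internet, then "remediate" by removing
# them. The rule shape here is hypothetical; a real tool would revoke
# the rule via the cloud provider's API when a change event fires.

OPEN_TO_WORLD = {"0.0.0.0/0", "::/0"}

def violates_open_port_policy(rule):
    """Return True if an ingress rule exposes a port to the world."""
    return rule.get("cidr") in OPEN_TO_WORLD

def remediate(rules):
    """Split rules into compliant (kept) and non-compliant (removed)."""
    kept = [r for r in rules if not violates_open_port_policy(r)]
    removed = [r for r in rules if violates_open_port_policy(r)]
    return kept, removed

rules = [
    {"port": 443, "cidr": "10.0.0.0/8"},  # internal-only: fine
    {"port": 22, "cidr": "0.0.0.0/0"},    # SSH open to the world: violation
]
kept, removed = remediate(rules)
print(len(kept), len(removed))
```

The point of the pattern is that the policy lives in code, so the same check that generates the report can also drive the automatic fix.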
Gillian: That's going to get every single developer in here who, the very first time they started to build something out, found their resources disappeared because they didn't tag them properly.
Jeremy: Yeah, well, that's good though. Alright, so what else? Any other interesting internal projects?
Mark: Yeah, so there's a lot of good work in our financial central services space, where we're processing hundreds of thousands of records a minute in serverless, and that's all through Step Functions, Lambda, SNS, SQS, Kinesis, DynamoDB. But that's huge volumes, you know. That's really starting to stretch serverless and the managed service capabilities. And so that's ongoing at the minute. But it's a massive success, and we've had _____ talking about it at Serverless Days in Dublin around some of the cool stuff that's going on there. So it really shows, you know, that serverless is not just for your simple APIs and simple, you know, utility-type libraries; it's for the core financial processing engine of the company. So it almost shows that serverless is ready for pretty much every problem you can throw at it.
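One discipline a high-volume pipeline like the one Mark describes has to get right is idempotency, because SQS and Kinesis can each deliver the same record more than once. A minimal sketch of deduplicating a batch by record ID (names and record shape are hypothetical; a real pipeline would track seen IDs in something durable like DynamoDB, not a local set):

```python
# Minimal sketch of idempotent batch processing, the kind of safeguard a
# Lambda fed by SQS/Kinesis needs because those services offer at-least-once
# delivery. Record shape and names are hypothetical for illustration.

def process_batch(records, seen_ids, handler):
    """Apply handler to each unseen record; silently skip duplicates."""
    results = []
    for record in records:
        rid = record["id"]
        if rid in seen_ids:
            continue  # duplicate delivery: already processed, do nothing
        seen_ids.add(rid)
        results.append(handler(record))
    return results

batch = [
    {"id": "a", "amount": 10},
    {"id": "a", "amount": 10},  # redelivered duplicate
    {"id": "b", "amount": 5},
]
seen = set()
totals = process_batch(batch, seen, lambda r: r["amount"])
print(totals)  # the duplicate "a" is processed only once
```

At hundreds of thousands of records a minute, skipping duplicates cheaply like this is the difference between a financial engine that balances and one that double-counts.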
Jeremy: That's amazing. Alright, so what about customer-facing stuff? Have you put anything out there now that your actual customers are using that’s built in serverless?
Gillian: Yeah. So one of our biggest ones out there is the virtual agent, which is in our call centers. They've deployed a virtual agent that's answering some of the calls, so it's taking some initial stuff or answering really simple stuff. And they were actually able to use some of the things from the digital assistant: some custom resources, some of the patterns we put in place for the use of Amazon Lex, which is Amazon's natural language understanding service. And that let them move really quickly. They were picking it up with Amazon Connect, which is Amazon's call center as a service as well, which was a brand new tool and a lot of learnings. But it's massive. I mean, the cost is so low, because it's all managed services, and they're able to, you know, bring it out, trial it with users for a little bit, make sure the whole system works, and then just scale seamlessly up. And they're just adding in more and more functionality all the time. And in fact, they presented that last year at re:Invent, so that talk is out there too.
Mark: I think one of the early really customer-facing ones was around the document generation and policy generation capabilities within our Liberty Mutual Benefits space. A lot of the underlying document management and document generation stuff is very heavily built on serverless, and that was around the same sort of time as the digital assistant, so there were two teams that really helped pioneer a lot of serverless capabilities within the company and helped to prove that this is something that is going to be a game changer for us as a company going forward.
Jeremy: Awesome. Alright. So let me ask you this question because obviously you two are deep in the enterprise world. You're on the forefront of bringing serverless in there. So what would be your advice to other enterprises looking to adopt serverless?
Mark: I think getting access is number one: creating access to some sort of sandbox environment where your developers can explore and experiment risk-free, without getting chewed out by some manager for, “What are you doing opening up an AWS account on your credit card?” So I think that was key. We had a sandbox environment where we could explore new features and play a little bit, and actually even access the console. Beyond that, having a clear pathway to production is critical, and our public cloud team has done a fantastic job really automating all of the security, compliance, or legal issues that we may have and making sure that they are part of that automated pathway to production, so literally a developer could create something today and have it in production this afternoon. How automated, and how fast, we can deliver capabilities with that clear pathway to production is a big enabler for us as a company. I think you can't compromise on security, so there's a real message there: don't do anything risky. Make sure you know what your security profile is, and make sure that you have an approach for dealing with security in the cloud space. We've spent a lot of time working on threat modeling and making sure that we work with our security architects and security teams to really make it easy for developers to show the risks that they may have and show how to mitigate them. So I think that zero compromise on security is number one for us. And ultimately, if you're starting out, testing is a big thing, you know; really focus on those good testing practices. Make sure you have unit testing and integration testing, because it's different in a serverless environment. That allows you then to go safely, quickly, but with safety, so have a real testing approach. And then observability is probably the big one for us.
So monitoring and observability, making sure you know where stuff is and that you're getting appropriate alerts and alarms when things happen, is a big one for us. Gillian, do you have anything to add?
Gillian: So I tell people, especially in big enterprises, the same thing for both serverless and AI, which is: start now. You're already behind. If you haven't started, you need to start now. It takes a while to learn all the things that Mark has just said, and that's a lot of things. It takes a while to move your mindset from the highly architected things of before to serverless. Serverless is very different, even from microservices. So even if you're familiar with microservices, this is still a different paradigm. So it just takes a little while to learn. It takes a little while to move all your existing practices and your thinking about how you build your systems. So you need to start. You need to find places that are sort of safe-to-fail, places where you can try things out, and then gradually scale up. I think the big thing is, if you run the company, you need to create time for people to learn. Let them have that space and, you know, find your people who are really, really passionate about it, and then let them loose.
Mark: I think that was key for us. We had Gillian Armstrong, we had Gillian McCann, Laura MacFarland, Chris Gormley. We had a number of real serverless pioneers in the space who really did blaze the trail and opened up a number of doors for everybody else to come through behind them. So kudos to those guys.
Jeremy: So I think we can sum up your advice, Gillian, maybe by using that old “planting a tree” proverb: the best time to start with serverless was five years ago, and the second best time is now. Right?
Mark: I think one of the big things that we've certainly noticed is, you know, getting certified has actually been very useful for us. Certifications for certification's sake are pretty much pointless, but we find that it really has helped accelerate the knowledge of our development teams. It gives them something to aim for, but also, maybe it's just the way that the AWS certifications are set up, they've been very applicable to the technologies and the approaches and the patterns that we are encouraging our teams to embrace. So the AWS certification journey has definitely been a worthwhile one for our company. And I think we have reached a tipping point with that. We're over 10% certified now, so it's really helped accelerate things, so that whenever we talk to our teams, they know what we're talking about, which is usually a good first step.
Jeremy: Definitely. Awesome. Alright, well, listen. Thank you, Gillian and Mark. This has been an awesome conversation. How can we find out more about you? Let's start with Gillian.
Gillian: Yep. So Twitter, I am @virtualgill on Twitter. And Twitter's definitely where I hang out most of the time, and always happy to have conversations with people. My website is virtualgill.io, if you want to look at some of my talks or read some of my blog posts where I rant about various things. Um, but definitely Twitter’s the best place if you want to have a chat.
Jeremy: And that's @virtualgill with a “G.”
Gillian: With a “G,” yes.
Jeremy: Okay. And then Mark, what about you?
Mark: Twitter. I'm probably on there tweeting all things serverless, so you can get me at @markmccann on Twitter.
Jeremy: And then if people want to learn more about Liberty IT, the website for that is just liberty-it.co.uk, right?
Mark: That's correct. Yeah.
Jeremy: Awesome. Alright. Well, listen, I'm going to put all this into the show notes. Thank you guys so much.
Gillian: Oh, there is one more thing, Jeremy.
Jeremy: Oh, there's one more thing.
Gillian: One more thing, right. One more thing. Yeah. There's going to be a Serverless Days in Belfast.
Gillian: It's going to be in November, and I'm helping to organize it. So if people listening to the podcast want to come, they should definitely follow me on Twitter for when we tweet the formal announcement.
Jeremy: Awesome. Alright, Thanks again.
Gillian: Thank you.
What is Serverless Chats?
Serverless Chats is a podcast that geeks out on everything serverless. Each week, Jeremy Daly chats with another serverless champion to explore and do a deep-dive into specific topics in the serverless space.