Serverless Chats

In this episode, Jeremy chats with Ben Ellerby about the evolution from digital to serverless transformation, why hands-on experience is important to understanding what serverless actually is, the current problems with complexity, and why you can't be cloud native without embracing some form of lock-in.

Show Notes

About Ben Ellerby

Ben is VP of Engineering for Theodo and a dedicated member of the Serverless community. He is the editor of Serverless Transformation: a blog, newsletter, and podcast which share tools, techniques, and use cases for all things Serverless. He co-organizes the Serverless User Group in London, is part of the ServerlessDays London organizing team, and regularly speaks about Serverless around the world.

At Theodo, Ben works with both new startups and global organizations to deliver digital products, training, and digital transformation with Serverless across London, Paris, and New York.

Transcript

Jeremy: Hi everyone. I'm Jeremy Daly and you're listening to Serverless Chats. This week, I'm chatting with Ben Ellerby. Hi, Ben. Thanks for joining me.

Ben: Hi, Jeremy.

Jeremy: So you are the VP of engineering at Theodo, and you were just recently named an AWS Serverless Hero, so congratulations on that. So why don't you tell listeners a little bit about yourself and what Theodo does?

Ben: Ah yes. As you mentioned, I'm the VP of Engineering for Theodo. We help other companies launch digital products, be that startups, launching their initial MVPs, to large companies attempting a digital transformation. And more and more I'm helping our clients to use serverless. Be that through building their initial MVPs, but also training and upskilling their developers. So we're based in London, New York and Paris. And basically my role is to help coach our developers, and help us find the new technology areas we want to work on. And serverless has been highlighted as the main area we're trying to move towards. And many of our clients are starting to adopt serverless first architectures.

Jeremy: And what's your background?

Ben: My background, I've been at Theodo in London since we kicked off a team here about four years ago. Before that, a bit of time at IBM. And before that studying computer science.

Jeremy: Awesome. All right. So you mentioned digital transformation, and we've heard this term a lot, especially over the last couple of years. And I think some people think that means sort of moving from on-prem to the cloud, or sort of modernizing things. But you've been using this term, serverless transformation more recently. And essentially, this is this idea of going, I guess your second move to the cloud. Right? So could you explain what you mean by serverless transformation?

Ben: Yeah, sure. So what you touched on was digital transformation was that initial move to the cloud, which smaller and larger companies have managed some, with varying degrees of success. I actually helped a company called Junction launch their initial product about two years ago, which is an AI service that helps large companies plan their migration to the cloud. And that was very much a lift and shift approach. But more recently, if we take the example of Junction, they've had more and more targets going to things like SaaS, and FaaS and serverless first approaches. When I talk about serverless transformation, I'm talking about startups who are launching their initial MVPs and doing that in a serverless first approach, but also larger companies who are trying to consolidate their developer resource by building serverless first architectures, rather than managing infrastructure. And more than just managing infrastructure, common application things like authentication, moving to that as a service and really leveraging everything as a service to focus their development teams on the core business value that they're adding, the distinct business logic that makes their company who they are.

Jeremy: Right. So I mean, it's more about that lift and shift approach. And I think we've talked about this on the show a number of times, that trying to just sort of move everything as is from your on-prem into cloud is a bit of a fool's errand, right? I mean you're essentially copying this local environment, but you're not getting the benefits of the cloud environment.

Ben: Sure. And it has some benefits. Like virtualization was an initial move. Containerization was another move. And now we're seeing sort of function as a service, and other things as a service. Each is a further level of abstraction. The higher that level of abstraction goes, the more business value I think you get.

Jeremy: Awesome. Right. You wrote a post called In Defense of the Term Serverless, and nobody seems to want to have this conversation with me anymore. Because I have been very outspoken. I had a post a while back called Stop Calling Everything Serverless, where I tried to essentially define what I thought the term serverless was. And for me, I look at it as not a technology, not a managed service, not FaaS, not some sort of a spectrum or a ladder or these other things that I think are really, really interesting ways to try to classify what it is, because it's such a hard term to sort of grasp. But I look at it more as sort of this process of using these services that don't require you to really have an active involvement in the management of the infrastructure.

And to extend that even further in some cases where possible where you don't have to even worry about the scaling or provisioning a cluster of something like, I don't know, a cluster of Elasticsearch or something like that. So I think this is a perfect opportunity because in this community now we have a lot of forward thinkers and I think we want to move past this idea of what is serverless? The problem is that our community is for the most part an echo chamber. Right? And we keep having this conversation every once in a while when somebody from the edge sort of asks us this question. So, I'd like to get your definition of serverless and why you think that term actually is really important.

Ben: I think it's a long answer. So I've recently been working on the sort of preview chapter of the book, Serverless Transformation at Any Scale. And the first chapter of that deals with what is serverless? It talks about that move from virtualization to containerization and then function as a service. But then it talks about how it's not just function as a service, it's using cloud native technologies as much as possible, which makes it a hard thing. It's not a binary classification between serverless and not serverless. It's a polymorphic space that keeps changing and keeps adapting. I think we can place different services on a spectrum of serverlessness. So Elastic Beanstalk is obviously not serverless, but it's more serverless than manually provisioning EC2 instances, and that goes all the way up to using something like Cognito, which is very little code. It's really the cloud provider providing that logic for you.

So I think you can put things on that spectrum, but I don't think that's where the value is. I agree with you there. I think the value is the sort of, I don't like to use the words paradigm shift or mindset shift, but it is, it's a mindset shift to think serverless first, to basically, as an engineering leader or a developer, focus on trying to defend your team from doing work, trying to offload that to a cloud provider or a third party. It's extremely competitive at the minute, launching initial products, or if you're in an industry where you have to keep your costs down. So I think we need to focus on trying to leverage things as a service, which makes development more like combining lots of different things, which then makes you have to be an expert in lots of different things. And as you touched on earlier, this space keeps changing and we're in a bit of an echo chamber internally in the serverless sort of group, where we keep coming up with different ways of doing things.

AWS, for instance, introduces something like Lambda destinations and we now have three different ways of handling the same problem. I have lots of people asking me, should I use SNS? Should I use SQS? Should I use Kinesis? Should I use EventBridge? I mean, the answer is yes, but it just depends on scale and many different things. So I think at the moment the space is very much in flux and we need to sort of consolidate some best practices, which is going to come with time. But for me, for now, serverless is a good North Star for us to walk towards. So as a term for me, I think it captures the imagination of the place you want to get to. But you're right, I don't think it's particularly useful to try and place everything on a spectrum of serverlessness.

Jeremy: But how do you explain that to somebody who literally has no idea of what we're talking about? You and I were talking earlier and you had mentioned, hey, you're going through airport security and someone's asking you what you do and you said cloud computing and they wanted more details, so if you throw the term serverless out there, maybe they wouldn't even let you into the country because, I mean, honestly, it's just this thing where they're like, what does that mean? And there's all those jokes, there's still servers and serverless and things like that. But there truly is, I think a defining characteristic of what that term means. But it's so abstract. Right? And again, all of these other definitions. So how do you explain that to someone who maybe is new to the cloud or is trying to understand this role of leveraging third party services, the managed services and so forth. And like you said, writing as little code as possible, but probably a lot more configuration.

Ben: Yeah, it's a challenging area. So Theodo, as I mentioned, we build digital products, but we do that very hands-on with teams. Our goal is not to stay with a client for a long time. It's to empower their engineers to upskill in the areas, help them build their initial architecture, then let them be empowered to build it further. So I get junior developers, experienced developers, people with little sort of knowledge potentially of the cloud or of serverless, and it starts kind of how this book I've talked about is structured. The first chapter is what is serverless? It talks very abstract, then jumps into some detail, but the ending paragraph says something to the effect of, keep coming back to this chapter as you progress through the book, because you need to experience some practical examples. We see things like the URL shortener that doesn't actually use Lambda.

We see things like image resizing or PDF generation. They're great ways to show people the value of serverless. So generally in training projects around serverless, I sit with a team and talk to them very abstractly about the different definitions of serverless and what the space means, and they look at me a bit blankly, but I say, trust me, we'll come back to this in a couple of weeks. We then take part of their system and migrate that to a serverless first approach. A PDF generator, let's say, is a classic example that I like to do, because you can use Lambda, you can use S3, you can start to use some of the events, and you can start to see a lot of value. You can have a system that before required a complex queuing system, because PDF generation is quite complex and it takes quite a bit of compute power, but now we can parallelize that across many Lambdas. So they start to see, okay, cool, this scales; cool, we're leveraging third parties here, we're doing different things.
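The fan-out Ben describes can be sketched in a few lines. This is a hypothetical illustration, not code from any real project: the job payload and key names are invented, and the only real constraint used is that SQS's SendMessageBatch call accepts at most ten messages per request, so jobs are grouped into batches of ten before being enqueued, with each queued message then triggering its own Lambda invocation.

```python
import json

SQS_BATCH_MAX = 10  # SendMessageBatch accepts at most 10 entries per call

def pdf_job_batches(document_keys):
    """Turn S3 object keys into batches of SQS message entries.

    Each message describes one PDF-generation job, so each queued
    message becomes one Lambda invocation -- the parallelization
    that replaces the old complex queuing system.
    """
    entries = [
        {"Id": str(i), "MessageBody": json.dumps({"source_key": key})}
        for i, key in enumerate(document_keys)
    ]
    return [
        entries[i:i + SQS_BATCH_MAX]
        for i in range(0, len(entries), SQS_BATCH_MAX)
    ]
```

Each batch would then be passed to an SQS `send_message_batch` call; the queue's Lambda trigger handles the rest.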

And then you come back to the definition and you see them start to understand a bit more, and then you withdraw it and go more complicated. You take a microservice with API Gateway and you let them start to build out, and I think it's really learning by doing. But the complex thing with serverless, as the space has grown, is that there are many different ways of doing the same things. So people are almost scared to start, because they're not sure what the best practices in serverless are, and it's quite a confusing space. I don't know if you've found the same thing around best practices recently.

Jeremy: Yeah, and actually Paul Swail wrote a great article lately about, basically, don't trust best practices. Especially with a space that's moving so quickly, it's just really, really hard, I think, for somebody that's new to this to jump in at the level we're at. And I think this maybe moves us beyond best practices and to this idea of complexity. So we know that there are a ton of configuration options when you're building serverless applications. And the hello world ones are super easy, right? You write a little bit of code, you throw it into a Lambda function. Maybe put it on the other end of an API Gateway, maybe get fancy and connect it to DynamoDB. Maybe you do some SQS, like you said, and do some queue processing. For the most part, that is fairly simple.

But then let's say that you have a downstream system that is processing off of your queue and you need to throttle that Lambda function. Right? Now all of a sudden you've got all of these settings in the SQS redrive policy, where the visibility timeout has to be six times the function timeout, and you have to set your retries higher to handle the initial burst of polling; there's things like that. There's the Lambda destinations, as you mentioned, right? So what happens when your functions fail? Where does that go? How do you monitor that? Where's the retries? What are the retry policies? There are a lot of retry policies; I've just done a presentation on this. There's a lot of retry policies and a lot of failure modes. So that type of complexity, coupled with the term, it becomes really hard for people to start. And maybe that's a question that I have here: is this complexity increasing to a point where it's going to make adoption harder?
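The coupling Jeremy mentions, visibility timeout at six times the function timeout, can be captured in a small helper. A sketch only: the six-times rule is AWS's published guidance for SQS event sources, while the default `max_receive_count` here is an arbitrary illustration, not a recommendation.

```python
def sqs_trigger_settings(function_timeout_s, max_receive_count=5):
    """Derive queue settings for an SQS queue that triggers a Lambda.

    AWS guidance: set the queue's visibility timeout to at least six
    times the function timeout, so an in-flight message isn't
    redelivered while a slow invocation (plus retries) may still be
    running.
    """
    if not 1 <= function_timeout_s <= 900:
        raise ValueError("Lambda timeout must be 1-900 seconds")
    return {
        "VisibilityTimeout": 6 * function_timeout_s,
        # After this many failed receives, the message moves to the
        # dead-letter queue instead of retrying forever.
        "RedrivePolicy": {"maxReceiveCount": max_receive_count},
    }
```

Encoding the relationship once, rather than hand-setting two numbers that must stay in sync, is exactly the kind of abstraction the rest of the conversation argues for.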

Ben: I think so. And I think people inside the serverless community are taking a very sort of purist serverless first approach, which I myself have done. And I found myself a little while back saying there's no way I'm possibly letting an RDS inside this architecture. There's no way, it's not a serverless service. It's going to ruin everything. But sometimes you do have to work with existing technologies. And I think if we're throwing onto somebody who's just moving into serverless the complexity of DynamoDB and all the other services, with the retry policies and everything, we're not going to end in a good place. Which is why I think your recent talk about adopting non-serverless components in an architecture was very valuable.

And you're right, there's still a lot of complexity in different patterns, that you yourself have come up with, around how we manage retries. But you don't potentially have to have the perfect retry policy from day one. I think making a start, making some errors, and then learning from those. I think people adopting serverless have to learn with the community, and that's a challenging thing to say, but at the same time, so many other problems have disappeared that your team can spend time looking into retry policies, or all the different services they have to upskill in.

Jeremy: Right. Yeah. And so I mean definitely one of the things too, just speaking of people sort of starting with serverless, I think you hear sometimes people think, well, serverless is great for spiky workloads, right? Because it can handle a large spike if you've got a Black Friday sale or something like that as an example. But if you're not somebody who needs that type of scale, then it might just be easier spinning up a LAMP stack or something like that on a virtual machine, or using Elastic Beanstalk as you had mentioned. But I look at serverless and I look at the ecosystem of tools around it, and it's so much more than scale. I mean obviously scale is important. I mean that's one of the reasons why DynamoDB is so great, because yes, it can handle 60 gajillion transactions per second or whatever it is that AWS or that amazon.com has on Black Friday.

But what I think is interesting about approaching serverless as more than just a mechanism for scale is all of that built in reliability and resiliency that is available to you sort of right out of the box. So you see a lot of people, you're encouraging people to move to serverless. But do you see pushback on that where people are thinking that serverless is just something for these spiky workloads?

Ben: No, definitely. And I think a lot of the marketing material about serverless talks about its infinite scale, which sounds amazing, but if you have the problem of infinite scale, you also have good things in your business model. So that's one of the reasons the book I mentioned has the subtitle "at any scale": because it is for those large companies with huge workloads attempting a digital migration, moving to a vendor driven approach, or having to handle all these complex retries and amazing scale and all the problems that come with that. But it's also for the small startups who can't spend their time building generic application functionality, like user signup with multifactor authentication and password reset. They need to focus on the actual business value. So serverless is an abstraction that helps them focus on that business value, but it also gives you amazing things like the developer experience.

If you have a stack that's completely serverless, you get that. I'm on a project at the minute with a client migrating a legacy PHP application to a completely serverless stack. Now, they were very happy to move to serverless. They wouldn't drop the PHP, but that's for their CTO to live with. I'm joking. But actually PHP has been absolutely fine on the project. Because they have a completely serverless stack, they have a CI/CD pipeline that for every pull request spins up a complete stack and runs 300 automated integration tests. And it's not a problem. It runs in six minutes. So it's an amazing pipeline that they've built. But this also means that they have no out-of-date dependencies in their project, because every night a cron job goes and updates the dependencies. And if the tests pass, well, it propagates that update. And if a test fails, then it goes onto Slack and into a ticket for the next day. The development team are empowered by the flexibility serverless gives them. But yes, that comes at the cost of a bit more complexity when it comes to configuration.

Jeremy: Right. And the other thing with complexity too is, obviously, I've had this discussion with a few people before as well: what do we call serverless applications? Are they microservices? Are they nano services? They're these sort of complex beasts. But what's great about it, regardless of what you call them, is you have a lot of control over configuring each individual function to do something for you, right? So this is the argument that I always make: if you're putting everything in a container, this is the problem with monoliths, right? If you need to scale your PDF processing, and that's running on the same machine or the same container as your login process or your order process or something like that, then when you have a huge batch of PDFs that needs to be created, you have to scale up that entire thing.

And that means that you're potentially over scaling some of these other things. You're wasting some compute. Whereas with serverless, you can say, look, I just have a function that just does PDF processing, and if that has to do 2,000 PDFs per minute, then it can, but the rest of my system can live at whatever scale. But that does create quite a bit of complexity. And it also creates sort of this messaging or this microservice communication problem. So how are you telling your clients to address this new volume of messages that need to be passed around in order to coordinate serverless applications?

Ben: Yeah, so you touched on a few things there. I've seen a lot of companies where their code base becomes unsustainable because of the amount of complexity in that monolith. And the business has a new requirement, analytics wants its events, but then it's two months on the backlog because the monolith is so tightly coupled. Now at Theodo we've started recently formalizing code quality internally across projects, and it's sort of a five point model. And one of those points is sustainability. So keeping the code base sustainable, meaning that it's always adaptable to change. And people will often make the argument that small companies don't need microservices, or that microservices are more complexity than they're worth. But when we talk about serverless, the ability for a team, for a developer, to be able to spin up a Lambda function that does something in production, in the real production stack, is amazing.

At Theodo we've recently started to, and I say recently, it's quite recent from AWS, try and adopt EventBridge, so formalizing the events flowing through our system. This means that we have this sort of centralized event bus, events flow through there, and that allows your Lambda functions or other services to subscribe to those events, and third parties to push events in. There's cool developer experience stuff, like a fully typed SDK that comes out of the box, but it's also a sustainability thing. It allows you to say, okay, well, these things are all just listening to events. If we define our system by events from the business point of view, and we have things listening and doing things with those events, we can add new events later and it's not going to be a problem. It's not going to be a huge friction, because it's simply a case of publishing new events and listening to them, and our existing services can choose to listen to them, but that's optional. Our new service can listen to them, or can listen to existing events and do different things.
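The centralized bus pattern Ben describes boils down to publishers emitting named business events and rules matching on them. A hedged sketch: the service name, event name, and fields below are invented for illustration, but the entry shape mirrors what EventBridge's PutEvents API expects.

```python
import json

def make_event(source, detail_type, detail, bus="default"):
    """Build one PutEvents entry for a central EventBridge bus."""
    return {
        "Source": source,            # the publishing service
        "DetailType": detail_type,   # the business event name
        "Detail": json.dumps(detail),
        "EventBusName": bus,
    }

# A rule pattern: any new service can subscribe to OrderPlaced
# events later, without the publisher changing at all.
order_placed_pattern = {
    "source": ["checkout-service"],
    "detail-type": ["OrderPlaced"],
}

event = make_event(
    "checkout-service", "OrderPlaced",
    {"order_id": "o-123", "total_cents": 4999},
)
```

Adding a new consumer is then just a new rule on the bus, which is the low-friction extensibility Ben is pointing at.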

So EventBridge is what we're really seeing as the future for serverless architectures. Now, like everyone, we're starting to try and understand the best practices. Sheen Brisals from LEGO has done amazing work there, and recently at the serverless meetup in London, which I help run with Ant Stanley, Bob Gregory from Cazoo gave an amazing talk about how his team does event storming to sort of figure out the whole system, and then they use EventBridge on top of that to sort of formalize how those events flow. And he made a very interesting point. Everyone's talking about latency, and EventBridge can actually have quite a bit of latency, a few minutes or even longer. But what he talked about was, I mean, his business model is giving you a car within 24 hours. So you have a car, you sell your car to them and they ship you the new car. I think, I don't fully understand their business model.

But I mean, that's a 24 hour process. Three minutes of latency isn't going to kill you. And okay, I'm not sure what the worst case is for EventBridge, but it's something in that ballpark. But EventBridge has given them so much. They have autonomous teams working together to build a product faster than I've seen other companies ever build, and building it sustainably. And that sustainability, I think, comes from the fact they're event driven. And you don't need to be event driven to use serverless, and you don't need to use serverless to do event driven, but everything makes a lot more sense when you do both at the same time.

Jeremy: Right. Yeah, I totally agree. I mean, I am a huge fan of EventBridge, and it's funny you mentioned that about sort of best practices and what those best practices are. The EventBridge team themselves are still trying to understand what those patterns are, what customers are using it for, and what sort of makes sense for them to do. And so for me, I agree. I think that this idea of event driven with serverless, right? The combination of those two things is huge. And before something like EventBridge came out, I was using SNS, basically as a standalone sort of event service, where I would send all my events to this event service. You'd have to take that and then distribute that out and create all the subscriptions around that. There are so many more sort of capabilities and options and features in EventBridge that I totally agree. I think that is going to be the way.

I know that my latest projects, I've used it to sort of standardize how we do messaging between the different applications, or the different microservices, I guess, if you want to call them that. And so essentially that brings us to this next question. So if EventBridge is this sort of central component of all future serverless applications that you build, and maybe not small ones, maybe there's a few small services in whatever you want to call those, but certainly for the systems that are more complex, what's the next step? What's the future going to be? I mean this is probably a dumb question to ask, but what is serverless going to look like in five years, in your opinion?

Ben: I think it's going to be more abstraction and more consolidation around how to do things. So in five years there's going to be more abstraction around that configuration, so you're not having to manually configure retry policies. Out of the box you'll sort of be able to, well, maybe not point and click, but in a very short amount of YAML, we'll be able to have an event driven architecture. And maybe that becomes formalized, an event sourcing service rather than just an event bus. Maybe other areas of event driven become more formalized, but it's always going to be an increase in abstraction. We went from virtualization to containerization to function as a service and other things as a service. Now we're sort of building more event driven. We can have abstractions at different levels, so there might be abstractions in particular services, or abstraction of the whole architecture.

Things like the Serverless Framework have tried Serverless Components, AWS has tried the Serverless Application Repository. And those things have had varying degrees of success. We seem to have had a spike of complexity recently, as so much has been announced and so much has been released, but I think we're getting more abstraction. If we take the amazing work you did about integrating RDS with Lambda, you built a whole sort of open source project that really helped people with that. Recently, although there's still a need, AWS has abstracted a lot of that with RDS Proxy. They've seen a need from the community and they've abstracted that. If we take EventBridge, people were doing CloudWatch custom events kind of before, and hacking it, and then they formalized that and provided a level of abstraction. Now, is that abstraction going to be driven by the cloud provider or by the community? Well, I think it's going to be a bit of both. AWS and other cloud providers are going to add more services but also increase abstraction as it goes on, and the community is going to build amazing open source projects that increase abstraction. So I think it's the move to more abstraction, which means less configuration, and configuration is just code. So it means less code to achieve the same business value. So for me it's more abstraction, but right now it feels like less abstraction.

Jeremy: Yeah, no, I totally agree. I mean that's one of the things where I started saying this towards the end of last year, that what serverless needs now is some sort of abstraction as a service. Something that runs on top of CloudFormation, and even in a sense on top of something like the Serverless Framework or SAM. And you're right, Serverless Components is one of those things. The CDK, the AWS CDK, has some capabilities around that as well, where you could sort of formalize, maybe call them best practices, and say, look, the standard way that we want to create this particular type of solution or this outcome that we're trying to create, that we want to be able to write this in one line or two lines of configuration and say this is the base use case, this is the outcome, this is what we want it to do.

And then if someone says, oh well I needed to retry more times or I need to do this. Then you can go in and you can start tweaking those defaults. But I totally agree. I think that having a way to encapsulate some larger process that stitches together all of these individual sort of off the shelf products that AWS or other cloud providers give to you is going to create and empower people to not only create something that is highly usable, right? Which doesn't take a PhD or months of study to figure out how all these different components work. But being able to take that and then standardize that across multiple departments in their organization. And that allows you to follow standards, to implement security policies, to implement whatever best practices your organization sees fit and vet those, and then allow for the developers to just easily pick that up and go ahead and do that.

And I know Liberty Mutual has done some things around vetting different AWS products, so that their developers can go ahead and use them knowing they already meet the compliance they need, and things like that. And now I know they're working on CDK patterns that they're going to be able to distribute, so that they'll be able to implement these different, highly reusable patterns, and again, fully vetted, battle tested, just things that work right out of the box. Just making it super easy for them to do. So I totally agree with you on that.

Ben: So just then, we both kept using the disclaimer "on any cloud" as sort of a get-out-of-jail-free card, not to seem too tied to AWS. But I think we both agree that AWS are really the trailblazer in this space, and that's why we're often using AWS terminology to talk about it. As we talk about abstraction, and we talk about standardization of these components so that they're easily used by our teams, and abstraction is going to help us be able to actually build applications, not spend huge amounts of time learning configuration, do you sort of see that abstraction being across cloud providers, or still specific to each cloud provider?

Jeremy: Yeah, actually that's an interesting question because I think that what we're seeing is some of these other clouds sort of now trying to play catch up. Right? And I mean there's no, I don't think there's anybody who would argue with the fact that AWS is years ahead when it comes to, at least pioneering the idea of serverless and things like managed services. They are releasing all of these individual services and they're releasing all of these components and adding features that make it easier to build these applications. So thinking about other clouds, what worries me is that, and I like diversity. I like the idea of there being another cloud that can do something different and maybe you can do something a different way in cloud A versus cloud B. But I think that actually deepens the divide and makes it harder for people to understand not only what serverless is, but to develop the skills they need to actually build serverless applications.

Like right now. And not to mention Paul Swail again, but Paul Swail just wrote an article the other day, he's been very prolific lately, about venturing away from AWS and using some other services. Like maybe Cosmos DB works better for certain things, or Google Bigtable or something like that. And could you use that in combination with your current AWS infrastructure? And I mean, honestly, in applications now we use IBM's NLU API for natural language processing. I use Twilio, I use SendGrid, we use Stripe. I mean we use all these services that are not AWS services. So really using these other components is not unheard of. And that's sort of how I like to envision multi-cloud in a sense, where you're using other clouds but you're not trying to create full parity or some of these other things.

But to go back to your question, the biggest problem I see with trying to standardize things across clouds is that, one, it's not going to happen. It's too diverse, and functions as a service in general is such a commoditized thing at this point that it doesn't matter. Whether you run your functions on Azure, on Google, on AWS, or on Spot Instance functions or whatever, I don't think that really matters too much in terms of things like vendor lock-in. But what I do think matters is that if you choose SQS, which is the Simple Queue Service in AWS, the best way for you to process events off of that is to use a Lambda function.
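To make that SQS-to-Lambda pattern concrete, here's a minimal Python sketch of a Lambda handler consuming SQS messages. The `Records`/`body` event shape follows AWS's documented SQS event format, but the handler name and the order-processing logic are hypothetical, just for illustration; the sample event lets you exercise the handler locally without any AWS resources.

```python
import json

def handler(event, context):
    """Lambda handler invoked by an SQS event source mapping.

    AWS batches messages into event["Records"]; each record's "body"
    holds the raw message string. The business logic here is
    illustrative: it just collects an assumed "orderId" field.
    """
    processed = []
    for record in event["Records"]:
        message = json.loads(record["body"])
        processed.append(message["orderId"])
    return {"processed": processed}

# Local invocation with a hand-built, SQS-shaped event (no AWS needed):
sample_event = {
    "Records": [
        {"messageId": "1", "body": json.dumps({"orderId": "A-100"})},
        {"messageId": "2", "body": json.dumps({"orderId": "A-101"})},
    ]
}
print(handler(sample_event, None))  # -> {'processed': ['A-100', 'A-101']}
```

In a real deployment the pairing between the queue and the function is configured as an event source mapping, which is exactly the kind of provider-specific glue being discussed: the handler code is portable-ish, but the wiring is AWS-shaped.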

And if you're using Azure and you're writing complex workflows, then the best thing for you to do is to use Logic Apps, right? You're not going to use Step Functions to communicate with Azure Functions. So I think what you're going to see is that people are going to have to pick a provider and learn that provider deeply: understand everything from IAM to, again, all these failure modes, all the little tricks they can use, and all the different levels of abstraction. I just don't think you're going to see that translate to multiple cloud providers. And I don't think Kubernetes is the answer either, because again, that really just handles the compute part of it, which is highly commoditized. But I'd be interested in your thoughts on it as well.

Ben: Well, yeah, I think when people think multi-cloud, they think of the containerization dream: Docker managed to standardize containers, and therefore we can use Kubernetes to run the same container on GCP as on AWS, as on Azure, as on Alibaba Cloud, really any cloud provider. That containerization standardization allowed portability, which for some is important. So I'm generally of the view that, yes, you might find a way to standardize functions as a service, but there's no way you're going to standardize DynamoDB, or NoSQL databases, or the events flying off them and how they interact. It's just not possible, because of the different offerings. And you're right, the diversity is good. It's good that people are doing things in different ways, and that cloud providers are doing things in different ways. But when you move to a regulated industry, it becomes a bit more difficult.

So in the European Union, there's a regulation that's been passed that banks need to be fault tolerant to the loss of at least one cloud provider. This is because banking and the economy are very tightly coupled, and when banks go down, it's not a good thing for the economy. And as there are, let's say, three major cloud providers that banks of this scale could work with in Europe, it's very likely that multiple banks are on the same cloud provider. So if one of these cloud providers were to go down, which many of us think wouldn't happen, but obviously it's something to be considered, then multiple banks could go down and the impact could be huge. So being fault tolerant to the loss of at least one cloud provider, what does that mean? Well, it doesn't mean that you have a hybrid between different cloud providers, where some stuff is done in AWS and some stuff is in GCP.

It means that you're able to deploy the same infrastructure in both. And I think at the moment, and maybe forever, if you want to do that, you can't fully embrace serverless. Yes, you can use Knative for the compute parts, but I think you're going to be stuck with containerization and classical databases until either standardization happens, which I don't think it will, or there's a more advanced way of doing this sort of multi-cloud.

Jeremy: Right. Yeah, I agree with that. I mean, this is the lowest common denominator argument, right? If I'm going to choose something that I can run in multiple clouds, how can I truly be cloud native if I'm using something that's not built for the cloud? Maybe that's the wrong way to say it, but I think that's where you have that huge amount of capability. If you're on GCP and you're using something like Firestore, that is an incredibly powerful service that does all kinds of really great things. And if you said, well, I don't want to be locked into Firestore, so I'm going to go ahead and just install a MySQL database or something like that, you're not going to get the features that you get from something like Firestore.

You're not going to get the scale features, let's say, of DynamoDB and some of the capabilities that it has if you just say, well, we're going to install MongoDB on some VMs, or we're going to install a Cassandra ring or something like that. So I totally agree with you. I think that in order for you to really embrace the cloud, you have to basically pick a provider. And maybe, if you've got the resources, great: pick multiple providers. Figure out how to build your app cloud native on AWS, and figure out how to build it cloud native on Alibaba or Tencent or whatever you're doing. But if you want to embrace that true cloud native mindset, if we want to call it that, you have to choose the tools that are built to be cloud native.

Ben: And then put those teams in different buildings and don't let them talk to each other.

Jeremy: Exactly, exactly.

Ben: Sure. But yeah, that's a huge cost. I guess if there's a hard constraint coming from a regulation point of view, maybe that's the sort of cost that has to be embraced, or people don't adopt cloud native architectures. And comparing which would be the higher cost is probably too complex to do.

Jeremy: Right. That's probably true. But the other thing to think of is that if you're developing for multiple clouds, and I have not seen any teams directly doing this, I would certainly think you're going to need people who are experts on AWS and people who are experts on Azure, for example, if you truly wanted to do those separate things. Because even if you're installing Kubernetes and you're running Knative and you're doing some of these other things, there are very specific services on each one of these cloud providers that handle things differently. There are different security requirements, different protocols, things like that, that you have to learn. So I do think that if you truly want to take that multi-cloud approach, and I hate the word multi-cloud, I think it's a terrible idea in many cases, but I get what you're saying on the regulation stuff, yeah, you just have to have a really big team in order to do that.

Ben: I mean, we've not even touched on the data duplication.

Jeremy: Right. Yeah. That's another-

Ben: Which is a huge one.

Jeremy: Yeah. Yeah, that's a really good point. So talk about that, the data piece of it. This goes to the vendor lock-in thing, right? People always get concerned that, well, if I choose an AWS Lambda function, then I'm locked into AWS. And I would argue, as you just mentioned, that it's really not about choosing the compute, or even choosing some of these other managed services. There are equivalents for some of these that you could potentially port over to. But as soon as you start putting a bunch of data somewhere, you're kind of locked in from a data aspect.

Ben: Especially when you get to the scale where vendor lock-in is a concern, and that's an immense scale, or a regulation requirement, which comes at large scale. The volumes of data we're talking about migrating, potentially between different storage mechanisms... I'm not sure it's always going to be possible, and that's where it becomes very complicated, whether you do it incrementally or not. My view is that you need to embrace vendor lock-in, and embracing vendor lock-in is called cloud native. They're just different names for the same thing, depending on your mindset towards cloud providers. But you need to embrace vendor lock-in, because for many, the availability that Amazon provides is better than the availability they'd be able to maintain on their own. And if the day comes where Amazon's going to triple its costs, or whatever the fear is...

When that day comes, it's probably going to be cheaper to move then, rather than always keeping the ability to move. Maintaining the ability to move cloud provider for years, when it might never happen, is not the right move. When that day comes, figure out how to move.

Jeremy: Well, I totally agree with that. Awesome. Okay, well listen, Ben, thank you so much for joining me and for sharing your serverless knowledge with the community. And congratulations again on the AWS Serverless Hero distinction. So if our listeners want to find out more about you, how do they do that?

Ben: Yeah, so I'm running a few different initiatives under this sort of brand of Serverless Transformation. There's a Serverless Transformation Medium publication; feel free to check that out. This same podcast recording is going to go out on the Serverless Transformation podcast, which you'll find on Spotify and other good places to find podcasts. And there are also a few open source projects under the Theodo GitHub. And if anyone wants to reach out with any questions, I'm always very available on Twitter.

Jeremy: And that's @EllerbyBen.

Ben: Yeah, that's @EllerbyBen.

Jeremy: Awesome. All right. I will get all that into the show notes. Thanks again.

Ben: Amazing. Thanks, Jeremy.

THIS EPISODE IS SPONSORED BY: Epsagon

What is Serverless Chats?

Serverless Chats is a podcast that geeks out on everything serverless. Join Jeremy Daly and Rebecca Marshburn as they chat with a special guest each week.