Jeremy chats with Mike Deck about Amazon EventBridge and how it works, what it means for the future of webhooks, and how we can use it to build serverless event-driven applications.
About Mike Deck
- Twitter: @mikedeck
- EventBridge Product Page: https://aws.amazon.com/eventbridge/
- EventBridge Documentation: https://docs.aws.amazon.com/eventbridge/index.html
- EventBridge Partner Information: https://aws.amazon.com/eventbridge/partners/
Mike: Hey, Jeremy, thanks a lot for having me.
Jeremy: So you're a Solutions Architect at AWS, and I'm pretty sure most people are probably familiar with what Amazon Web Services does. But why don't you tell the listeners a little bit about yourself and maybe what a Solution Architect does at AWS?
Mike: Yeah, sure thing. So I'm actually on our Partner team, so I work as a Partner Solutions Architect, which means that I work with both our ISV and consulting partners to help them with kind of any technical questions they have, work with our ISV partners on their product roadmaps and how they're integrating with our services, helping them with architectural questions, and things like that. Been at AWS for about four years at this point. Been on the Partner team the whole time, and most recently, I've been kind of specializing in the serverless space. So working with a lot of our great ISV and consulting partners that are doing things around Lambda and API Gateway, as well as with the new service that we just recently launched.
Jeremy: Cool. All right, so speaking of services recently launched at AWS Summit New York, AWS launched this new product called EventBridge, which is sort of this, and you can correct me if I'm wrong, but sort of this cool extension to CloudWatch Events and since you're on the Partner Team or you're the Partner Team Solutions Architect for Partner Integrations with EventBridge, you obviously know probably more about this than anybody else. So why don't you tell us a little bit about EventBridge and sort of what it does?
Mike: Yeah, absolutely. So I think you know, it's definitely accurate to compare it to CloudWatch Events. So really kind of the genesis of this service was that, you know, we saw customers building more and more with this kind of event-driven model, and CloudWatch Events is really a fantastic tool for doing these kinds of things. I think Forrest Brazeal had a blog post a little while ago about using CloudWatch Events to do awesome event-driven things. We see a lot of customers that are interested in this space. That event-driven pipelines project came out, got a lot of traction, and so we realized that, you know, there's really this kind of need to build additional services that make it easier to build these kinds of things. So we took the existing CloudWatch Events infrastructure and APIs and kind of extended them to add some additional features around integrating with SaaS providers to create more native event sources that you can use within your AWS applications, and then, yeah, extended those APIs to make it easier for customers to do things like creating custom event buses and attaching to SaaS event sources, etc.
Jeremy: So what are some of those use cases then that customers might build with this?
Mike: Yeah, so, you know, I think that one of the obvious ones is: hey, I want to trigger a Lambda function every time someone creates a new ticket in my CRM, for instance. Right? I want to go and kick off some sort of workflow, or maybe I'm going to start a Step Functions workflow or do something like that. Um, we also see a lot of customers interested in doing kind of audit-and-analytics type workloads. So I just want to ingest the full event stream of all of the things that have changed, you know, maybe in my identity management tool that I'm using. So every time a failed login happens, or every time a new user is registered, I just want to ingest that, throw it into a Kinesis Data Firehose stream, and put it out into an S3 data lake so I can go and query it with Athena or something like that. And then, obviously, you know, ML and doing kind of AI inference and things like that on all of these various data streams is becoming super popular as well, so this gives you a great opportunity to, you know, every time a new email is opened in your kind of customer engagement platform, ingest that and add it to your model, or do some sort of inference on it in order to drive additional business decisions.
Jeremy: Yes. So those are some really cool sort of things that you can do with it. And I remember Forrest's article about using CloudWatch Events and custom events as basically almost like an SNS topic, in a sense. So maybe let's get into the nuts and bolts of EventBridge, kind of how it works. So with existing CloudWatch Events, you can use the PutEvents API, and Forrest explains that in his article, where you can send an event and then you can subscribe consumers to it. But why this extension? What's this idea of having separate event buses?
Mike: Sure. So, um, yes. So I guess there's a few different reasons that you might want to have different event buses. So like you mentioned, with CloudWatch Events, there's really just the default event bus in each region of your account. We publish all of these sort of native AWS events to that default event bus, and so those just kind of appear in your account automatically without your having to do anything. You know, the ability to create additional custom event buses now lets you kind of segregate these different application domains, or different contexts, I guess, for the various different types of events that you need to ingest across your different services and applications. It also gives you a good way to create separate channels for different SaaS applications that you may have that you're trying to build event-driven architectures on the back of. So, yeah, we can talk a little bit more about the specifics of how it all works, but basically, every individual SaaS provider that you're integrated with is going to have its own event bus, so that you can write rules against that and keep all of those different events separated by source.
Jeremy: All right, so I definitely want to get into the SaaS side of things because I think that's probably one of the coolest features of EventBridge. But let's go back, maybe, and start at a lower level, and talk about events, sources, and targets. Because you mentioned that you can get all of these events that are happening within your AWS account. So anytime a new account's created or a new resource is created or something changes, there's these events that are sent to that default event bus. But now with these separate event buses, how would we subscribe sources to that, or set targets to trigger a Lambda function or other services?
Mike: Right, right. Yeah. So EventBridge works with this concept of rules. So every event bus has a set of rules associated with it, which allow you to essentially select the events that are interesting. And then for each one of those rules, you can create multiple targets that it's going to forward each of the events that match that rule down to. So, for instance, you may decide, I want a rule that's going to match all of the ticket created events from Zendesk. And so you would build a rule that has in it, you know, detail type equals ticket created, and then you'd associate, you know, potentially multiple targets with that. Maybe I want to send all of the new ticket records down to a Kinesis stream so they can go and be processed by some downstream analytics process, and then I also want to trigger, you know, a Step Functions workflow. So each of those would be targets, and you would associate those two targets with that one rule. And now, any time that Zendesk publishes a new event to your account that is of type ticket created, that's going to match that rule and then be sent downstream to those specific targets. So you can have multiple rules and multiple targets per rule in order to build these sophisticated event-routing models.
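To make that rule-matching idea concrete, here's a minimal Python sketch of how an event pattern selects events by field. The source name and ticket fields are illustrative, and real EventBridge patterns support much richer matching (prefix filters, numeric ranges, nested detail fields) than this toy version:

```python
# A hypothetical event pattern selecting "Ticket Created" events from a
# Zendesk-style partner event source (names are made up for illustration).
pattern = {
    "source": ["aws.partner/zendesk.com/some-app/tickets"],
    "detail-type": ["Ticket Created"],
}

def matches(pattern, event):
    """Simplified rule matching: every field named in the pattern must be
    present in the event, and the event's value must be one of the allowed
    values listed for that field."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

event = {
    "source": "aws.partner/zendesk.com/some-app/tickets",
    "detail-type": "Ticket Created",
    "detail": {"ticket_id": 1234, "priority": "high"},
}

print(matches(pattern, event))  # True - both fields match the pattern
```

A matched event would then be forwarded to every target on the rule, so both the Kinesis stream and the Step Functions workflow in the example above receive it.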
Jeremy: So you keep mentioning SaaS providers, like Zendesk and stuff, and I totally want to get into that. But before we do that - this isn't just for SaaS providers, though, right? I mean, you can create your own custom events. And again, this may be in comparison to Forrest's article, but you can sort of use it almost like SNS, like an SNS pub-sub type thing. So maybe explain, you know, what's the difference between this and something like CloudWatch Events or SNS or even Kinesis?
Mike: Yeah, so I think SNS is probably the most similar service, if you want to think about it that way, in terms of SNS gives you the ability to publish messages and then fan those out to multiple subscribers. SNS has the concept of a subscription policy that allows you to kind of filter messages per subscription, so you can get similar kinds of features to the way that rules work within EventBridge. You know, the big difference on the SNS side versus EventBridge - for custom events, anyway - is that the downstream targets that you have accessible within SNS are more limited than what's available in EventBridge. EventBridge has 17 different AWS services that you can integrate with natively, so you don't necessarily have to pass through a Lambda function if you just want to go and, yeah, drop something on a Kinesis stream or Firehose, or kick off a step function, etc. So I think that's one of the big key differences there: the richness of the targets, as well as the source piece that we'll talk about here in a little bit. I think where SNS really shines is when you're in these super high throughput or really massive fan-out scenarios. So if you've got thousands or millions of subscriptions that you want to have for a single topic, SNS is definitely the way to go. Similarly, if you're really trying to push, you know, millions of TPS or something like that through a particular topic, SNS is a better option for those really massive, high-throughput workloads. So those are kind of the key call-outs. And then, yeah, talking about Kinesis a little bit: Kinesis gives you more of a streaming model, so everyone that's going to consume that stream is going to see every single message on that stream. You know, you're somewhat limited in the total number of consumers that can read a single stream, and each individual consumer would be responsible for filtering out any messages that they weren't interested in.
Jeremy: And so with SNS - I think that's a really good point you make about the services that can be triggered off of EventBridge, because with SNS, again, you can't start a step function. You have to write a Lambda function that then calls that step function. So there's just extra processing in there. And then with Kinesis, that's sort of interesting, because this was probably my biggest question when I first found out about this product: this idea of event buffering, right? Typically, especially when we have downstream services that might not scale as much as our Lambda functions would, we would put a queue or a Kinesis stream or some way to buffer those events in between. There's no event buffering yet in EventBridge, right? You would still use something like SQS if you did need to buffer events.
Mike: Exactly. That's a great point. So certainly those types of services - SQS, Kinesis, SNS, all of the standard messaging services that you typically think of - oftentimes get used in conjunction with, you know, previously CloudWatch Events, and now EventBridge. So certainly having a rule whose target is an SQS queue, so that you can buffer all of those events and then have, you know, each individual consumer have their own queue that they can work off of, makes a ton of sense. So using SQS as that kind of simple queue that gives you that message durability, and all of the delivery and dead letter queue semantics, but then using EventBridge to really manage the message routing and filtering pieces, which is where it really excels. So, a super common pattern there, definitely.
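As a sketch of that buffering pattern, here are the request shapes you'd hand to EventBridge's PutRule and PutTargets APIs (via boto3's `events` client, say) to route matched events into an SQS queue. The bus name, rule name, and queue ARN are all illustrative, and the dicts are just built locally here rather than sent to AWS:

```python
import json

# Hypothetical PutRule request: select "Ticket Created" events on a
# partner event bus (names are made up for illustration).
rule = {
    "Name": "buffer-ticket-events",
    "EventBusName": "aws.partner/zendesk.com/some-app/tickets",
    "EventPattern": json.dumps({"detail-type": ["Ticket Created"]}),
    "State": "ENABLED",
}

# Hypothetical PutTargets request: matched events land in an SQS queue,
# which buffers them for a consumer that polls at its own pace.
targets = {
    "Rule": rule["Name"],
    "EventBusName": rule["EventBusName"],
    "Targets": [
        {
            "Id": "ticket-buffer-queue",
            "Arn": "arn:aws:sqs:us-east-1:123456789012:ticket-events",
        }
    ],
}

print(json.loads(rule["EventPattern"]))  # {'detail-type': ['Ticket Created']}
```

The design point Mike makes is visible in the split: EventBridge owns routing and filtering (the rule), while SQS owns durability and delivery semantics (the target).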
Jeremy: All right, super important question here. How is this priced?
Mike: Yeah, so pricing is exactly the same as the existing CloudWatch Events custom event pricing. So a dollar per million events that get published into your event buses, basically. So if you're publishing custom events yourself, then it's exactly the same as if you're publishing to the default CloudWatch Events bus - yeah, a dollar per million there. And then similarly, if you've got this hooked up to SaaS event sources, as long as you have an event bus hooked up to that event source, you're charged for each event that the SaaS partner publishes to your account.
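Back-of-the-envelope, that rate is simple to work out. The $1.00-per-million figure is the one quoted here; check current pricing before relying on it, since fan-out to targets is free but the published-event count is what's billed:

```python
# Cost model quoted in the episode: $1.00 per million events published
# to an event bus; targets/fan-out don't add to the bill.
PRICE_PER_MILLION = 1.00

def monthly_cost(events_published):
    """Dollars charged for a given number of published events."""
    return events_published / 1_000_000 * PRICE_PER_MILLION

print(monthly_cost(5_000_000))  # 5.0 (five million events -> $5)
```

So even a fairly busy SaaS integration publishing millions of events a month costs single-digit dollars on the ingestion side.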
Jeremy: Okay, cool. All right. So let's talk about SaaS partners, because I think this is one of those things that I definitely don't want to get lost in this announcement. And there's so many products that AWS comes out with on a regular basis that sometimes it's hard to keep up. But I think the coolest innovation with EventBridge is almost this idea of completely getting rid of this concept of webhooks, right? So now, with the ability for partners to put events into an event bus directly, and we can talk about how the authorization of the stuff works, but this is essentially what it's doing, right? You're getting rid of webhooks?
Mike: Exactly. Yeah. So instead of, you know, going out to pick your SaaS partner of choice - you know, I go out to Datadog or whatever - and configuring a webhook, where I have to give them a URL and maybe set up some sort of secret token, etc., now I'm basically going out to that same partner and just saying, "hey, I want you to send it to my AWS account. Here's my account ID, and here's the region that I want you to send these events to." And then we kind of take care of everything else there for you. So now you don't have to go and stand up an endpoint. You don't have to stand up a Tomcat server in your VPC - or ideally, obviously, I guess we're talking on a serverless podcast, so we'd be using API Gateway and Lambda, right?
Jeremy: We would, we would. Yes.
Mike: But still, that's not a trivial thing to do necessarily. I mean, I shouldn't say trivial - it's easy to do, but that's still, you know, another thing for you to manage, and kind of another hoop to jump through, as opposed to, you know...
Jeremy: And there's cost involved, right? The typical way that I would set up a webhook is most likely API Gateway. Actually, if I was doing a high-volume webhook, I would probably use ALB at this point just because it would be a lot cheaper. But I would do that, probably hit a Lambda function, and then write into an SNS - sorry, SQS queue - in order to go into a database or something. Or maybe I'd do a direct service integration with API Gateway. But with this, you don't have to set up any of that infrastructure. You're just paying for - is it just reads off the stream, or writes to the stream as well?
Mike: It just writes to the event bus, so it doesn't matter how wide you fan out on the back of the event bus. You're only paying for each message that's sort of getting published to it. So yeah, I think that's a great point. And honestly, I mean, I think just the sort of the reduction in developer friction too is actually a huge part of it. So I mean, how quick can you get something stood up where, hey, I just wanna, you know, trigger a Lambda function every time a new object comes into S3. We kind of want that same experience no matter what that event source is, whether it's something from inside of AWS or outside of AWS, we want you to be able to just super quickly say, "hey, yeah, there's this event source. Let me attach my Lambda function. Let me attach my step function state machine" or whatever the case may be, and you're off and running.
Jeremy: So how does the customer go and actually build an integration with, you know, partner XYZ?
Mike: Right. So yeah, for partners that have support for EventBridge, basically, it's a pretty simple process where you go out to the partner's portal - you know, developer portal or console or whatever they've got - and provide them with your AWS account ID. As part of doing that, the partner is then going to go and create what's called a "partner event source" inside of your account with that account ID. The nice thing about this is you don't have to give them a cross-account IAM role; you don't have to mess with any of that type of permissioning. They'll essentially create that, and then when you go back to the EventBridge console, you'll see a list of all of these new event sources that are available. So, you know, the SaaS partner that you went to will pop up in that list. You can check the box and say, "I want to associate this with an event bus," and then the event bus is there. It's ready to go. You can start adding rules and attaching targets, and start building from there.
Jeremy: Alright, so what about infrastructure as code? Can we set up these rules with CloudFormation at this point?
Mike: Yeah, so CloudWatch Events has CloudFormation support today, so you can create rules and targets there. On EventBridge, for custom event buses, that will be coming soon. It's not available right this minute, but we're definitely planning on adding that.
Jeremy: Alright, cool. Alright, so now, and actually, let me ask this question first: retry behavior, right? So you have these 17 different downstream targets that this can trigger - things like the two retries for Lambda, that still applies?
Mike: Right, so, yeah, if you think about it, really what EventBridge is going to do is, for every target that you've got configured on a particular rule, it's going to make sure that the event gets delivered to that target successfully. Now, in the case of Lambda, what successful delivery means from EventBridge's perspective is that it was able to asynchronously invoke your function. So when it gets a success back from the Lambda service saying, "hey, yeah, the invoke call you made was successful" - you know, 200, good job - EventBridge considers that event delivered, and it's done. So then, at that point, you're really relying on the standard Lambda retry policy within that kind of async invoke flow. And so if you've got a dead letter queue configured on your Lambda function, etc., it's all going to work exactly the same as you'd expect.
Jeremy: And what are the retry policies on SQS queues or Kinesis or step functions, things like that?
Mike: Yep. Basically, we'll retry for 24 hours to make sure that the event gets delivered to whatever target you've got configured. You know, again, depending on the service, that ultimately just means that we were able to successfully hand it over there. So, you know, when you think about an SQS queue, that's ultimately just, hey, we're going to go and make sure that the SQS service has successfully accepted the message that we sent to it. And then obviously the retries and all of the downstream processing are going to be up to you and how you're polling SQS, etc.
Jeremy: Sure. So we can assume, though with the distributed nature of this, that it's an at-least-once-delivery type model, as opposed to exactly once.
Mike: Correct. Yeah. So this is an at-least-once-delivery.
Jeremy: So make sure you design for idempotency and things like that.
Mike: Right. Yeah, exactly.
Jeremy: And then just I guess one question on that, that I'm thinking of now is so after the 24 hours, if for some reason a service doesn't accept the event, is there some sort of concept of a dead letter queue for events yet or?
Mike: No, not within EventBridge. So right now, basically, we're going to try for 24 hours, and if we can't do it, then we consider that to be a failure, and there's not right now a good way to kind of react to that. I think in the vast majority of cases, you know, 24 hours is plenty of time for...
Jeremy: Yeah, I would think so.
Mike: ... the services to recover because again, the downstream services that you're integrating with, you know, you're not using just like a standard HTTP endpoint, like the way that SNS would. So it's kind of native AWS services that you're relying on there.
Jeremy: Perfect. All right, so let's talk about partners for a second, because again, this is something that's really interesting. I think, you know, every SaaS provider out there that does webhooks now has some way to configure webhooks. So this is obviously something that they're gonna need to build themselves in order to integrate with AWS, which again, I think the demand will be there. So it'll likely be a smart move on their side to do this. But maybe we can talk a bit about how a partner would go about building one of these integrations.
Mike: Sure, absolutely. So, yeah, like I mentioned before, you know, the standard customer flow is they're going to start at the partner's site. So if you go to the EventBridge console, you'll see a list of all the partners that are integrated, and you can kind of get a quick link back over to the partner's side in order to go and start the flow. But really, things don't start until you're over on the partner's side. So if you're a partner and you want to build this integration, basically, you just need to have kind of a form inside of your developer console that allows a customer to specify: hey, this is my AWS account, and this is the region that I want you to send events to. And then when they submit that to you, you make one API call, CreatePartnerEventSource, where you basically pass in a name for the event source and the account ID that your customer provided you. So you're essentially creating this new handle that you have access to publish events to, and that is kind of the bridge between your account as the partner and the customer's account, without having to do this kind of IAM cross-account role assumption dance, etc., right? So once you've created that event source, the customer can go back into the AWS console and associate that event source with an event bus that they want. But as soon as you've created that event source as the partner, once that source is there, you can start sending events to it immediately, using an API call, PutPartnerEvents. So it's basically the same thing as putting a custom event to the default bus. You just get to specify this kind of partner event source instead of a specific event bus in your own account.
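As a sketch, the two partner-side calls described here map to request payloads like these, shown as plain dicts you'd pass to boto3's `events` client (CreatePartnerEventSource, then PutPartnerEvents). The partner name, account ID, and event fields are all made up:

```python
import json

# Step 1 (partner side): create the partner event source in the
# customer's account. Partner event source names follow the
# aws.partner/<partner-domain>/... convention; this one is illustrative.
create_source = {
    "Name": "aws.partner/examplepartner.com/customer-123/tickets",
    "Account": "123456789012",  # the customer's AWS account ID
}

# Step 2 (partner side): once the source exists, the partner can publish
# events to it immediately - no cross-account IAM role needed.
put_events = {
    "Entries": [
        {
            "Source": create_source["Name"],
            "DetailType": "Ticket Created",
            "Detail": json.dumps({"ticket_id": 42, "priority": "high"}),
        }
    ]
}
```

Everything else (associating the source with a bus, writing rules, attaching targets) happens on the customer's side of the bridge, which is what keeps the partner integration down to those two calls.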
Jeremy: And a new event bus is created for each partner. And actually a partner could create multiple event buses. Correct?
Mike: Correct. That's a good point. So any individual partner could create multiple event sources that are associated with your account. If you wanted to have separate event sources - you know, kind of a common example of this would be, let's say it's an HR system that's publishing multiple kinds of events. Maybe some events are about really sensitive data, like salary information about individual employees, and then you've got other events that are about time off requests or something like that that are not nearly as sensitive. Those could be directed to different event sources, and then each partner event source that gets created in your account will have its own dedicated event bus that you create.
Mike: And so then, by doing that, you can create specific security policies on each one of those event buses. So I can have, you know, my sensitive event stream that's locked down, and I can have the other one that maybe is more open and enabled for other people to go and write rules against more freely.
Jeremy: Alright, so that's a really cool feature. And I do like that it makes it super easy for the partner to integrate, where they just have to make that one API call, and then you handle all of that authentication and authorization on your side, with the customer requesting that stuff. Alright, so that's really cool. So what's the process, though, of becoming a partner? Because it sounds like you have to sign up for this?
Mike: Right. So right now, today, it's a bit of a custom request process. So we absolutely want to onboard more partners, and there is going to be a process for going through that, and that's documented on our website - I'll make sure that you've got a link for where people can go if they want to sign up to become a partner. But overall, it's kind of a matter of saying, "hey, I want to do this; this is my domain name that we'll use to identify your integration." You'll go out and build things, we'll do a quick validation with you, and then kind of finalize onboarding at that point. Obviously, we want to continue to move this forward, automating that process and making it more self-service. But as of today, if you're interested, definitely go check out the link that we'll share, and you can get started that way.
Jeremy: Alright, well, I'm going to make a public service announcement. I'm going to put every SaaS company on notice. I'll give you, like, three months. If you're not integrated with this, I'm going to stop using you and use somebody else who does. Because I just see this saving a ton of money, making the developer experience so much easier. Yeah, I just I love this. So anything else on the partner side? You know, I'm sure there'll be a bunch of documentation or there probably is a bunch of documentation about how to do all this stuff. Seems pretty easy.
Mike: Yeah. I mean, I guess the one other thing to note is just, you know, when you think about webhooks and kind of all that goes into them - everything from doing the security, to doing retries, to managing that yourself - it tends to actually get pretty complicated if you want to do it right. I think it's really easy to throw a webhook out there, but, you know, if you're just using a simple auth token in a header somewhere that's a pre-shared key, you know that there are certain vulnerabilities there. And really doing the appropriate retry semantics and everything - it's way easier to offload that onto our plate. You know, that's one of those classic, undifferentiated heavy-lifting things that we love to solve at scale.
Jeremy: Yeah, absolutely.
Mike: Yeah, anyway, we'd love to work with additional partners that want to integrate.
Jeremy: So serverless webhooks. I think that, because honestly, even if you built serverless webhooks in the past, with all these other serverless features, this just consolidates all of that into one simple call from the API provider, or from the SaaS provider. And the other thing I love about it too is there are so many instances where you lose events with webhooks, because something goes on with your infrastructure or whatever, and this is just one of those things where the partner, I'm assuming, will be able to build in their own retries and things like that to make sure that events get delivered. Not that webhooks aren't scalable, but to me, this just seems like an awesome innovation. So totally, you know, congrats to you guys for coming up with this, or for implementing it, at least, because it is very, very cool. Alright, so maybe we can switch gears and talk less about EventBridge specifically, and kind of talk about event-driven architectures. So maybe for people who aren't familiar with this, because we might have been talking over people's heads, with events passing around and triggering Lambdas and doing all this kind of stuff, maybe you could explain in a couple of sentences what exactly event-driven architecture is, and kind of how it fits into serverless, too, might be good.
Mike: Yeah, absolutely. I mean, I think it's probably easiest to understand when contrasted against kind of a command-driven architecture, which I think is what we're mostly used to. So this idea that I've got some set of APIs that I go out and call, and I kind of issue commands there, right? So maybe I have an order service, and I'm calling create order. Or, downstream from that, there's some invoicing service, and so the order service goes out and calls that and says, "Create the invoice, please." So that's kind of the standard command-oriented model that you typically see with API-driven architectures. In an event-driven architecture, instead of creating specific, directed commands, you're simply publishing these events that talk about facts that have happened - these are signals that state has changed within the application. So the order service may publish an event that says, "hey, an order was created." And now it's up to the other downstream services; they can observe that event and then do the piece of the process that they're responsible for at that point. So it's kind of a subtle difference, but it's really powerful once you start taking this further down the road, in terms of the ability to decouple your services from one another, right? So when you've got a lot of services that need to interact with a number of other ones, you end up with a lot of knowledge about all of those downstream services getting consolidated into each one of your other microservices, and that leads to more coupling; it makes it more brittle. There's more friction as you're trying to change those things, so that's a huge benefit that you get from moving to this event-driven kind of architecture. And then in terms of the relationship to serverless, obviously with services like AWS Lambda, you know, that is a fundamentally event-driven service. It's about being able to run code in response to events. So when you move to more of this model of, hey, I'm just going to publish information about what happened, then it's super easy to add on additional custom business logic with Lambda functions that can subscribe to those various different events, and that kind of provides you with this ability to build serverless applications really easily.
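Sticking with the order-service example, the "publish a fact" side might look like this PutEvents entry - the source name and detail fields are illustrative, and the dict is just built locally here rather than sent through boto3:

```python
import json
import datetime

# The order service publishing a fact ("Order Created") instead of
# calling the invoicing service directly. This is the shape of an entry
# you'd pass to EventBridge's PutEvents API (names are made up).
order_created = {
    "Source": "com.example.orders",
    "DetailType": "Order Created",
    "Time": datetime.datetime(2019, 7, 11, 12, 0, 0).isoformat(),
    "Detail": json.dumps({"order_id": "ord-123", "total": 59.99}),
}

# Downstream services (invoicing, shipping, analytics) each write their
# own rule matching this DetailType; the order service knows nothing
# about any of them - that's the decoupling Mike describes.
print(order_created["DetailType"])  # Order Created
```

The key contrast with the command model: nothing in this payload names a recipient, so adding a new consumer later means adding a rule, not changing the order service.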
Jeremy: Yeah, I like how you - and I've heard Danilo say this in the past, which is great - you know, calling them facts. These events that come through, they don't change, right? Something happened and you know about it, right? So rather than updating a record in a database that gives you the current state of something, this is sort of that ledger, right, that immutable ledger that, if you think about it, has all of that information attached to it. And the other thing, too - you know, I think this is what trips a lot of developers up, and it's funny, too, because once you're in it and once you understand it, it seems very logical. But I think if you take a step back and look at it as someone who's not familiar with this, the idea of asynchronous invocations is something that, again, trips people up, because we're very used to this request-response model. But the asynchronous nature of event-driven applications is, you just kind of put something out there into the ether, and something else picks it up and does something with it. So maybe a question for you, I guess, is: what are some of the patterns or best practices for building event-driven applications? And I don't know if we can cover that in a few minutes, but maybe the top-line ones. And maybe, actually, how they would apply to EventBridge?
Mike: Sure, yeah. I mean, I think one thing that we already talked about a little bit is this idea of having, you know, kind of an event store, if you will. So this would be something like SQS that provides this sort of durability of the events that are getting consumed by an individual consumer. So, like you said, you know, there's this potential for me to create this event, I throw it out into the ether, and then what happens if one of these downstream services, one where it's really important that it gets this event, happens to be down when that event is produced? So using things like SQS to provide that durability of those events so they can be picked up when that service comes back, I think that's a really important practice, to make sure you're thinking about what the durability needs are of the events that I'm producing. And then a nice side effect of that is now you get much better availability characteristics of your system overall, because now, even if I've got an upstream service that's relying on some downstream behavior, even if that downstream service happens to have, you know, an outage or is not responding very quickly, maybe it's just in a reduced capacity state, the upstream service can continue doing its job. It can continue responding to its clients and customers and continue to operate normally, and then whenever those downstream services come back, they can handle it. The converse piece of that is, of course, you need to think a little bit about this eventually consistent data model that you're going to end up with. So you're going to have, you know, each one of your services probably has its own data store, and because events are now propagating asynchronously through your system, each of those data stores may have kind of a different version of the world at any given time.
And so just kind of being aware of that, understanding it, like you mentioned, having this ledger of events and using that event-sourcing pattern to keep track of the state of the system, ends up being really powerful in those scenarios, because now I have this ability to manage the state of the world as I understand it at a point in time, and then also the ability to easily roll back or understand what the state of the world was at some past point in time. Anyway, yeah, that's kind of the nutshell description of a couple of interesting practices. Obviously, we could talk all day about that.
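The event-sourcing idea Mike mentions, keeping an append-only ledger of facts and deriving state from it, can be shown with a tiny in-memory sketch. This is just an illustration of the pattern, not EventBridge itself; the `OrderCreated`/`OrderShipped` event types are made up for the example.

```python
def apply(state, event):
    """Fold one immutable event into the current state.
    Event types here are illustrative, not a real schema."""
    kind = event["type"]
    if kind == "OrderCreated":
        state[event["orderId"]] = "created"
    elif kind == "OrderShipped":
        state[event["orderId"]] = "shipped"
    return state


def replay(ledger, upto=None):
    """Rebuild state by replaying the ledger from the beginning.
    Passing `upto` replays only a prefix, which is exactly the
    'state of the world at a past point in time' ability."""
    state = {}
    for event in (ledger if upto is None else ledger[:upto]):
        state = apply(state, event)
    return state


ledger = [
    {"type": "OrderCreated", "orderId": "o-1"},
    {"type": "OrderCreated", "orderId": "o-2"},
    {"type": "OrderShipped", "orderId": "o-1"},
]
```

Because the ledger is never mutated, `replay(ledger)` gives the current view and `replay(ledger, upto=2)` gives the view before the shipment event, which is the roll-back-in-time property being described.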
Jeremy: Yeah. I think we maybe opened a huge can of worms. I think the only other thing I would say, too, just in the context of EventBridge: I wrote a post called Serverless Microservice Best Practices for AWS, and one of the microservices I define is SNS as its own sort of microservice in and of itself. Because you could use that to bridge between multiple services. So if you wanted to use it to choreograph events between multiple services, as opposed to using Step Functions to do orchestration or something like that, it's a very good way to do that. And with some cross-account capabilities and things, it makes it very interesting. I think EventBridge actually solves that problem and does it better, right? Because now you don't have to go in and specifically create some SNS topic that maybe lives outside of all these other services, which is sort of a weird thing, where I often find myself publishing a single SNS topic as its own microservice, as its own thing, and using that for multiple microservices to communicate with one another. So I like how this is just kind of there, and you don't have to worry about it, like, "oh, did I create this?" or "didn't I create that?" And I'm assuming, you know, you can create as many of these custom event buses as you want. So if you did want to follow that serverless stages mentality, where you have your dev stage, you have your production stage or staging stage, and maybe if those were in separate accounts, actually, those event buses would be in separate accounts anyway, but I think you could do something similar to that if you wanted to.
Mike: Yeah, absolutely. So just to be clear, there is a limit of, I think, 100 event buses per account by default. So, yeah, it's not necessarily completely unlimited, but you can definitely create a number of these custom event buses and use them just as you were describing.
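The per-stage custom bus idea Jeremy describes could be sketched with the EventBridge `CreateEventBus` API. The `service-stage` naming scheme below is an assumption for illustration, not an AWS convention.

```python
def bus_name(service, stage):
    """Derive a per-stage custom bus name, e.g. 'orders-dev'.
    The naming scheme is a hypothetical convention for this sketch."""
    return f"{service}-{stage}"


def create_stage_bus(service, stage):
    """Create a custom event bus for one deployment stage.
    Custom buses are a soft-limited resource (default limit around
    100 per account, as mentioned above), so reuse rather than
    creating one per deploy."""
    import boto3  # deferred so bus_name stays usable without the AWS SDK

    events = boto3.client("events")
    return events.create_event_bus(Name=bus_name(service, stage))
```

Publishers would then set `EventBusName=bus_name("orders", "dev")` in their `put_events` calls so dev and production traffic never mix, the same isolation you would otherwise get from separate accounts.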
Jeremy: Well, now with Control Tower, we can just create as many accounts as we want.
Mike: Yeah, I mean, you're kind of joking, but I think that's actually the sort of best practice that's going to continue. As account creation becomes easier and easier, definitely having that kind of segregation is huge.
Jeremy: Totally agree. Alright, well, listen, like you said, we could talk all day, but why don't we wrap this up. So again, thank you so much for joining me, and obviously, for telling us about EventBridge, because I think this is a really, really cool innovation. So why don't you tell the listeners how they can maybe find out more about you, and actually, probably more importantly, no offense to you, how they can find out about EventBridge?
Mike: Yeah, of course. So, I'm on Twitter @mikedeck. Not super active on there, but I'm more than happy to respond to anyone who's got questions about this topic or others. And then certainly going to the standard EventBridge product page, aws.amazon.com/eventbridge, would be the best place to start off. And then you could certainly just go to the console if you just want to jump in and start building. That's how I always get started with new AWS services myself, so I would encourage you all to go and check that out. And, yeah, I'd love to hear your feedback, hear about what you're building.
Jeremy: Awesome. Alright, well, we'll get all that into the show notes. Thanks again, Mike.
Mike: Thanks a lot, Jeremy.
What is Serverless Chats?
Serverless Chats is a podcast that geeks out on everything serverless. Each week, Jeremy Daly chats with another serverless champion to explore and do a deep-dive into specific topics in the serverless space.