Watch this episode on YouTube: https://youtu.be/t3KHoCAVazU

Transcript

Jeremy: Hi everyone. I'm Jeremy Daly, and this is Serverless Chats. Today I'm speaking with Michael Behrendt. Hey Michael. Thanks for joining me.
Michael: Hey, Jeremy. Thanks for having me.
Jeremy: So you are a distinguished engineer and chief architect for serverless on IBM Cloud at IBM. So why don't you tell the listeners a little bit about your background and what you do at IBM?
Michael: Sure. Thank you. So I've been working at IBM in various technical roles over the last 15 to 20 years. I have been in product development and product incubation, and I've been working in the field as a workload architect. For the last 10 years I've been working in the Cloud division itself, working on various topics, incubating things and so on. And for about six years now, I've been really focused on serverless as a topic as a whole. So that's what I'm doing most of my time: working with customers, working on product development, making architectural decisions, technology decisions, and so on.
Jeremy: Awesome. All right. First of all, I want to thank IBM for sponsoring this episode. That's great, continuing to support the community and continuing to invest in serverless. And when it comes to serverless at IBM, you are the guy. You were there right back in the beginning. I had Rodric Rabbah on the show a couple of weeks ago, and we were talking about how it all got started. But I know you have a bunch of stories as well. So why don't we go all the way back, to six years ago, and talk about how it began? How did serverless at IBM get kicked off?
Michael: There are some interesting stories there. So a long while ago now, I was looking into the serverless market as it was evolving, what was happening in the field, what customers were doing. And I felt like we needed to do something in the serverless space as well. And on purpose, I thought we shouldn't start this as a right-off-the-bat product development effort, but rather, since it was such a new space, do some exploratory stuff first and keep it really open-ended in terms of what we would end up with from a technology perspective.
So I was in Beijing on a business trip and I had a call with a VP of research for Cloud at IBM. I still remember it was 10:00 PM. And we talked about how we needed to do something in that space. So we agreed on that call: let's do something in that space. And he basically then brought in a team from the research side, Rodric was part of that team, to kick off that whole effort.
Jeremy: Right. I don't think I've ever heard a story that starts at 10:00 PM in Beijing that didn't have an exciting ending to it. All right. So you brought in this team to start working on it. What did you do first? What was the initial goal? I mean, you were surveying the market, doing the research, as you said. So how did you take those first steps?
Michael: So we put together this team of really talented people in research, and we basically set out our goal: what do we want to accomplish from a workload perspective? Which kinds of workloads do we want to support? We wanted to allow composition of functions, something we are talking about these days as well, but it was a new concept back then. We wanted to be very flexible in terms of which kinds of workloads people could run. Should it only be functions, or should it be coarser-grained workloads as well? So we went in different directions.
We looked at non-functionals like, how quickly should it be possible to deploy a new function or update a function, to have a very quick inner-loop development cycle. And that drove lots of technology and design decisions. And we ran that with playbacks every week, I believe, where the team played back to a broader group of people what they were doing, their findings and so on. And we iterated our way through that. And one of the big milestones at the end of this first wave was OpenWhisk, as an open source project.
Jeremy: Right. So what were some of those early use cases? Because that was one of the things when serverless first started coming out. And again, OpenWhisk is a FaaS, a functions-as-a-service platform similar to Lambda or Google Cloud Functions, things like that. But what were those early use cases? Because I remember way back in the beginning, it was very, very limited.
Michael: Yeah. So I think one of the first use cases was, and this is a bread-and-butter use case these days as well, those HTTP endpoints. That was a very broadly applicable, horizontal use case across many industries. Another one, and I still remember the specific customers we were working with in those days, was data processing: objects, or photos in particular, that had to be processed in a certain way, like auto-cropping, auto-sharpening, object detection, storing metadata.
And I still remember one of our very first customers, they went GA while we were still in beta, because they felt good with what they had. And I still remember talking to the CEO one time, and he said that in the early days their operations guy came to him and asked whether our billing engine was broken because the bill was so low. They came from a PaaS background. So they moved from a PaaS to functions as a service, and what they saw was a 10X performance increase in combination with a 90% cost reduction. And that was just astonishing to them, something they had never seen before.
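To give that early use case a concrete shape, here is a minimal sketch of an OpenWhisk Python action of the kind described: a function that receives an event about a new photo and records processing metadata. The bucket and key parameter names are assumptions about how an object-storage trigger might be wired up, not something fixed by OpenWhisk itself.

```python
# Minimal OpenWhisk Python action: the platform calls main() with a dict of
# event parameters and treats the returned dict as the JSON result.
def main(params):
    # Assumed parameter names; they depend on how the trigger is configured.
    bucket = params.get("bucket", "unknown-bucket")
    key = params.get("key", "unknown-key")

    # A real action would fetch the object here and run the image pipeline
    # (auto-crop, auto-sharpen, object detection) before storing results.
    metadata = {
        "source": f"{bucket}/{key}",
        "operations": ["auto-crop", "auto-sharpen", "object-detection"],
    }
    return {"status": "processed", "metadata": metadata}
```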
Jeremy: Right. Oh, that's amazing. So we can't talk about functions as a service without talking about the 800-pound gorilla in the room, which is AWS Lambda. But I know, and this is something I actually really appreciate about what's happening in the serverless movement right now, that people are looking at it slightly differently. So while everyone's trying to come up with a definition, there's a lot of diversity in how people are applying it and how they're looking at it, whether it runs on Kubernetes or on its own VMs or whatever it is, or on the V8 engine, as with Cloudflare Workers. So there's a lot of, I guess, difference of opinion, but in a good way. So I'm curious, looking at something like AWS Lambda, which I know, talking to Rodric, sort of triggered the "Hey, we need to do something as well." What's the different philosophy there, I guess?
Michael: Yeah. So I've been talking to many, many customers over the last years. Many of them are using serverless, but many of them are still not using serverless yet. And one of the biggest inhibitors I heard frequently was: we would love to use serverless, but you're too constrained in terms of memory. You're too constrained in terms of CPU. You're too constrained in terms of execution duration. You're too constrained in terms of your programming model, chaining functions in and out. You're too constrained in terms of XYZ.
So while people love the attributes of serverless, they do not always like the constraints that come with it. And the attributes are, I think, what is dominating the conversation these days: I don't need to manage infrastructure. I only care about my code artifact. I never pay for idle. I only pay for what I consume versus what I allocate. And those are foundational attributes of serverless that, in my world, define today what serverless is. And they can be applied to a much broader spectrum of workloads than just what you can handle with functions only.
Jeremy: Right. Yeah. And it's sort of an interesting balance, because for me, I really like some of the constraints of serverless. I like that it's event-driven, you know what I mean? I like that functions don't run for 10 hours or something like that. You have some of those constraints that almost force you to think differently about building your application. But on the other hand, I can see why certain customers would say, if we wanted to move to this as a primary compute model, we would have to have ways to overcome some of these sort of artificial limitations.
Michael: Yeah. And sometimes it's a performance thing. If you can get more processes within the same process space, they can do data sharing within that. One of the constraints imposed a lot in the early days of serverless was that no node-to-node communication was possible. So it was very hard to build anything that required latency-sensitive communication between the nodes of a distributed deployment. So I think there is lots of goodness in trying to stick as closely as possible to the constraints that were established as part of the almost-manifesto of serverless, but still having the freedom to go beyond that if you want to.
Jeremy: Right. Yeah. And I think that's interesting, to be able to say, I need 100 cores when a function spins up for two seconds or something like that. That'd be interesting to have. So let's talk about that. Let's talk about some of these other applications that IBM is looking at trying to expand serverless into. What are some of these other application types?
Michael: Yeah. So what we're seeing a lot these days is, like I mentioned before, data processing. But not only data processing in a necessarily embarrassingly parallel way; also data processing that requires tighter coupling between the processing entities. If you just want to do a group-by or a join or something that requires more data sharing between those components, that's something we are seeing. And then all sorts of workloads that go up into the high double digits of gigabytes of memory, or that require longer execution times. Lots of stuff in the AI and ML space, both in terms of serving, but also in terms of training. So I think everything around data in the broadest possible sense, be it data pre-processing, be it data analytics, or be it AI and ML, is something that we are seeing a big uptake on.
Jeremy: Right. And what about the customers that are using this now? What are you seeing them doing with the expanded capabilities that IBM has?
Michael: There is some low-hanging fruit for people to get into the serverless space. And what I think is important as well, from a customer value proposition perspective, and I talk a lot to large enterprise customers: there is value in running a dozen HTTP endpoints on a serverless platform, but that only consumes so much capacity, right? An HTTP endpoint is usually not overly huge in terms of its resource footprint. But when you think about data processing or data analytics workloads, they can be really big. And so what I see customers starting to do is looking at those workloads ... also batch, for example, is a typical case.
Not batch in the traditional sense, where you have a batch run start at midnight and it only processes then, but maybe continuous batch in the sense of: since I'm serverless, it doesn't matter whether I spin it all up at midnight or on demand at the point in time when I need it. And so from that perspective, customers are seeing value in taking those kinds of workloads forward, to say, "I can make it more real time, maybe almost close to interactive for certain use cases." Where in the past I had to wait a few hours to get something done, now I can sit in front of the screen and wait for it, because I'm getting 1000 cores instantaneously.
Jeremy: Yeah. Well, and that's one of the things that I really like about an ETL task with serverless, this idea of running things in parallel as well. There are some cases where you can't just split a job up into parallel pieces and hope it all finishes in 15 minutes, even though, I mean, now you can get 10 gigs with Lambda. But I certainly see there being a huge benefit to saying, well, maybe a job has to run for 30 minutes, maybe it needs X number of cores or whatever it is, and being able to parallelize that out. So is that the mindset you're seeing, that, all right, we can use this serverless compute and scale it up really, really large? But are you still seeing people saying, "Well, if we parallelize, we can do this much faster?"
Michael: Absolutely. The rethinking of certain ways of doing data processing is a very interesting one. To give you one example, and it's quite topical specifically in these days: there's one customer, the European Molecular Biology Laboratory, quite a mouthful of a name. The interesting part is they're doing life sciences research, basically. They are dealing with certain data that's taken out of the body: how cells work, how the internals work. And they have large data sets and they want to be able to process them. And the folks working at EMBL, not all of them are computer scientists.
Many of them have expertise in a different domain, but they use computer science as a means to an end to do their job. And so what they started doing is, they started to rethink how they analyze this metabolomics data, as it's called. Those metabolomics data sets: how to break them down and process them in parallel with cloud functions on the server side, instead of in traditional ways. And that gives them the benefit that it's much cheaper, way more cost effective, and they did a very detailed analysis of that. And at the same time, the folks working there get more productive, because they get the results faster than they did before.
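The fan-out pattern behind that EMBL story can be sketched in a few lines of plain Python: break a large data set into chunks and map an analysis function over them in parallel. This local sketch uses processes; on a serverless platform each analyze() call would instead become one function invocation, and analyze() itself is a hypothetical stand-in for the real metabolomics step.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(chunk):
    # Hypothetical stand-in for the real per-chunk analysis.
    return sum(chunk) / len(chunk)

def run(dataset, n_chunks=100):
    # Split the data set into roughly equal chunks, then map over them in
    # parallel; on a FaaS platform each chunk would go to its own invocation.
    size = max(1, len(dataset) // n_chunks)
    chunks = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(analyze, chunks))

if __name__ == "__main__":
    results = run(list(range(1_000_000)))
    print(len(results), "chunk results")
```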
Jeremy: Yeah. That's amazing. And I love this idea of, again, just the parallelization of jobs that would normally take hours, I'm thinking of Hadoop jobs or things like that, they would just take hours and hours to run, and then you'd wait, and then say, "Oh, I think I didn't write the query correctly," or something like that. And then you end up with junk data and you have to do it again. So that's amazing. All right. So there's a whole bunch of customer examples I know that you have, and we can talk about some more of those, but I'd love to start talking about IBM Cloud Code Engine, what it is and how it helps people go serverless.
Michael: Sure. So we made this observation that I described, of customers wanting the serverless value propositions, but also the ability to go beyond the constraints imposed on them. We made it a few years ago, and we started this effort which is now in the market as IBM Cloud Code Engine. It's basically our implementation of this mindset, I call it serverless 2.0, which is preserving the value propositions but giving people the freedom of high amounts of CPU, memory, disk, long execution durations and so on. And we're initially focusing on HTTP serving workloads, of course. We are focusing on batch workloads. And we offer people the ability to run their container, whatever it is, on our platform.
So with that it becomes a general purpose capability as well. And they don't have to make trade-offs. In some other places you have to make a trade-off between scale to zero and very fast scaling on the one hand, or large amounts of capacity on the other. What we are trying to do with Code Engine is to not force people to make trade-offs, but to say: I can have all of those value propositions in the combination I like, versus having to choose between two or three different offerings.
Jeremy: Right. Yeah. And so now with Code Engine, basically you can write code like you normally would with a serverless function if you wanted to, right? You could just upload a snippet of code. But you mentioned you could also load a container. So what does that give you? Do you get event-driven capabilities when you load your own container? Can you schedule them? What are your options there?
Michael: Yeah. So we're making a distinction between what the artifact is you provide to articulate your business logic, versus what you can do with it. And we give people the ability, like I said, to provide their container, package whatever they want into it, and have that as the artifact that articulates the business logic. It can be a function. It can be an HTTP serving app, like a Node Express app. Or it can be a batch job as well. But regardless of what they choose as the input artifact, in the back end it's all treated in the same way. And all the capabilities, scheduled execution, event-driven execution, scale to zero, node-to-node communication, all of those capabilities are available regardless of what the input artifact is.
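As a concrete example of the bring-your-own-artifact idea, a minimal HTTP serving app that could be packaged into a container might look like the sketch below. Flask is only an illustrative choice, and reading the listening port from a PORT environment variable follows the Knative convention Code Engine builds on; treat both as assumptions to check against the platform docs rather than requirements.

```python
import os
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def hello():
    # Any HTTP response works; the platform only cares that the container
    # listens on the expected port.
    return jsonify(message="hello from a container artifact")

if __name__ == "__main__":
    # PORT is the conventional Knative-style injection point (assumption).
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```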
Jeremy: Right. I don't have to set up a Kubernetes cluster or anything like that, right? I just go and it's there for me?
Michael: Yeah. I mean, Kubernetes and serverless have been in interesting discussions over the last years, and there are very opinionated people on both sides. The way we are approaching it is to ask: what do people out there want to have? Is there a market for people who want something like a Kubernetes cluster, but without the cost and the pain of operating it? And so what they can do as well is, they can use Code Engine as if it were a Kube cluster, but without having to own the Kube cluster.
They get the Kube API, they can deploy something on there, they can run something on there. They can use kubectl, get pods and see their pods, and so on. They can do all of that, but they don't have to. So we give them an abstraction where they never have to get in touch with any of that if they choose to run batch jobs or web apps or functions. But if they want to, if they have the expertise, if they need it for provenance determination purposes or whatever else, they can also drop down into that.
Jeremy: Which is crazy. All right. So then in terms of users going in and setting this up, from a pricing standpoint, this is all sort of on demand? It gives you those serverless qualities that you were talking about?
Michael: Yeah. It's all on-demand. It's granular pricing, on a 100-millisecond granularity basis. The pricing is similar to the other players in the market as well. So people can, again, run batch jobs, HTTP applications, functions, containers, but the pricing model is the same for all of them.
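To see what 100-millisecond granularity means in practice, here is a toy cost calculation. The per-vCPU-second and per-GB-second rates are placeholders invented for the example, not published IBM prices.

```python
import math

VCPU_RATE = 0.000034  # placeholder $ per vCPU-second (not a real price)
MEM_RATE = 0.0000036  # placeholder $ per GB-second (not a real price)

def billed_cost(duration_s, vcpu, mem_gb):
    # Round the duration up to the next 100 ms increment before pricing it.
    billed_s = math.ceil(duration_s * 10) / 10
    return billed_s * (vcpu * VCPU_RATE + mem_gb * MEM_RATE)

# A 1.234 s request on 1 vCPU with 2 GB is billed as 1.3 s.
print(f"${billed_cost(1.234, vcpu=1, mem_gb=2):.8f}")
```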
Jeremy: Right. So then in terms of some of the use cases, you mentioned HTTP and batch jobs and some of that stuff. So what are people using it for now? Are they taking advantage of all those capabilities? Are you seeing them breaking those barriers of what you might be constrained by with typical FaaS offerings?
Michael: Yeah. The EMBL case I mentioned before, for example, is one of those use cases. Then we are working with some large enterprises who have workloads for revenue forecasting and things like that, which they had run so far in a traditional way and which they are now rethinking in terms of how to run on a serverless platform. And containers, that's a very horizontally applicable, very general-purpose kind of thing. So people are using that often. It's the kind of catch-all for everything they cannot get addressed elsewhere.
And then HTTP endpoints are really often used as well, as a kind of entry-level thing, because that too is very horizontal. And then people start to specialize, in terms of saying: I have this revenue forecasting application, like I said, and I use the batch capability. Or: I have this embarrassingly parallel executable that I want to have executed 100 times in parallel. For that, batch is also really super useful, because you just specify your command, like on the Linux command line, what you want to have executed; you specify as a parameter whether you want it executed 100 times or 1000 times; and then you just fire it off and it does all the work for you. So it's that broad spectrum of capabilities they're using.
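A sketch of what the worker behind such a run-it-100-times batch job might look like: each parallel instance reads an array index the platform hands it and takes a strided slice of the inputs. The JOB_INDEX and JOB_ARRAY_SIZE environment variable names are assumptions to verify against the Code Engine docs, and work_items() is a hypothetical stand-in for listing the real inputs.

```python
import os

def work_items():
    # Hypothetical stand-in for the real inputs (files, object keys, rows).
    return [f"item-{i:04d}" for i in range(10_000)]

def main():
    # Env var names assumed; check the platform docs for the real ones.
    index = int(os.environ.get("JOB_INDEX", "0"))
    total = int(os.environ.get("JOB_ARRAY_SIZE", "100"))

    # Strided sharding: instance i handles items i, i+total, i+2*total, ...
    for item in work_items()[index::total]:
        print(f"instance {index} processing {item}")

if __name__ == "__main__":
    main()
```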
Jeremy: Right. And depending on, I guess, which artifact you use. So if you just upload a function, you're patching the operating system and you're doing all those upgrades and all of the security things. What about if you're using a container as a deployment method? Do you have to patch some of those things yourself if the runtime is contained in there? Or how does that work?
Michael: Yeah. Excellent question. And that is often missed; it's a subtle discussion point. If somebody wants to provide a container, that person makes a conscious choice of wanting to have control over what is being deployed. So they can do anything within the container, but with that also comes the obligation to patch it if there's something to be patched. If somebody says, "I don't want to have to patch my Node runtime whenever there is a new CVE coming out. I just want to upload my app artifact," they upload the app artifact, and in that case we know what the artifact is they want to take care of, versus what we have to take care of. And when a new CVE comes out for the Node runtime, we will patch the runtime automatically under the covers for that customer.
Jeremy: Awesome. All right. So what about integration? IBM Cloud has a lot of different offerings; there's a lot of different things it can do. But from a serverless perspective, does Code Engine have native integrations with some of the other services in IBM Cloud?
Michael: Yeah. So we have integrations in terms of event sources, as you would expect, the usual ones like object storage, Kafka for data streaming, things like that. But we also have specific optimizations built in. For example, setting up really well-tuned, high-performing communication between a compute node, or a set of compute nodes, and object storage can be painful. You need to know which signature version you're using, V4 or V2. How many threads should you be using? How should the operating system be configured?
So for those parts, we have basically tested and optimized configurations built in. So if somebody wants to interact heavily with object storage, which is a super popular case, they can do that as well. What we also have is, for example, some of the AI capabilities: Watson Assistant, where you can build your chatbots and your interactions, and sometimes you need custom logic. You can articulate the custom logic in Cloud Functions. So there are integrations on both sides: other offerings calling us, but also us being able to call others.
Jeremy: Right. So with the containers you can probably use whatever runtime you want, right? If you wanted it to run Node or something else, you could do that. But if you're using one of the predefined runtimes, what are some of the runtimes that are supported if I just want to upload code? Because I just want to upload code.
Michael: Yeah. So it's the usual suspects. It's Node, it's Java, it's Python, Python being really popular and increasing in popularity. And all the other runtimes as well, in various versions, like Java, for example.
Jeremy: Awesome. All right. So you're clearly breaking some of the, maybe, I don't know if I'd call them the cardinal rules of serverless. But you're pushing past what people say: "Well, these constraints were supposed to be here." And I think that's amazing. I've said for quite some time that Kubernetes is an amazing technology. All the things that run with it are great, but it's just too complex for most people to manage. And you're not going to get a small startup with three or four people installing Kubernetes clusters and trying to do that.
And so I always looked at Kubernetes as the open source that would eventually be productized by all the major clouds so that you don't have to worry about it. And I think Cloud Code Engine is sort of doing that here. But what's the future like? How far are you going to push this? Because what serverless looks like in five years is anybody's guess. With longer runtimes, more memory, fewer constraints, we're talking about state now, how much state can be stored, and whether or not you can do node-to-node communication, which is still relatively new. So where are we going with this? What's IBM's view? And maybe, you being so intimately involved here, what's your view? Where is this going in the future?
Michael: That's one of the most interesting topics I'm working on these days as well. So from my perspective, we talk about serverless, and if I had to project out five years from now, we would probably be talking not about serverless, but about: I just have one server, and that server has the capacity I need in terms of CPU, memory and so on. But I think so far we've been treating the cloud as a place to deploy web servers, app servers, and database servers, if you want to articulate it in very broad terms, right? We've been looking at the cloud as a collection of virtual machines and of different server technologies. We've not been looking at the cloud as a computer in itself.
What if I treat the cloud as a single computer, something that behaves like a single computer from a CLI perspective, from a programming perspective? I think that's where serverless will be going. We sometimes use this term, "serverless supercomputer," which basically means I can define a computer of any size that comes within seconds and goes within seconds, and it's fit for purpose. It's customized to exactly the job I want to hand it right now. And if I need 1000 cores for 15 seconds to do something, I can get that. It feels like a computer that I get for 15 seconds that has 1000 cores, not a collection of containers or VMs or something like that.
So from a user mentality point of view, I think that's where we will be seeing lots of trends heading, in combination with programming frameworks that address this at a higher level. In the Python space there is interesting stuff happening, and in other places as well. But I'm very much behind this idea of a serverless supercomputer, which lets us treat the cloud as a cloud computer and not as a collection of stuff.
Jeremy: Right. Yeah. No, I love that idea, because I think about serverless, sort of the first iteration of it, which was: upload a little snippet of code and it just runs, right? And then it got to be more, not confusing, but more complex, where you had to start saying, "Okay, well, I want this event source, I want that event source." And then, oh, there are some failure modes in here, and then, oh, wait, now I need to run things in parallel, and then maybe I need to run multiple jobs simultaneously, or I need to compose functions, which has always been, I guess, a debate within the serverless community.
How do you compose functions? I know it's always the question of, should functions be calling functions, and things like that. And of course, there have been a lot of technology advancements that have made that easier, but what we're doing is stitching together a lot of primitives, right? We're saying, this can do this, this can do that. But I just want to run a snippet of code that can buffer events coming into it; instead, I have to set up a queue and I have to set up the function. I have to do some of these other things. I have to worry about scaling. I have to worry about maybe downstream throttling so that I don't overwhelm the downstream system.
I mean, there's just so much to think about now when you're building distributed applications. So let's go back to the supercomputer for a second though. What's the vision? How do you envision that working? As a developer, I just want to use, like you said, 1000 cores. What does that experience look like to you?
Michael: So the way I'm thinking about this is, we all know, most of us love the Linux command line. It has often been used as the poster child for good practice in how to develop capabilities: small chunks of functionality that you can stitch together. So today you have a cp command, where you copy data from one machine to another machine, or from one place on the disk to a different place on the disk. What would that cp command look like in a cloud? It would look like you enter cp and copy data from A to B, but under the covers, maybe 1000 cores get spun up, or 150 gigabit interfaces get spun up instantaneously.
And they all transfer data from one object storage to another, or from one device to another. They do the chunking of the data behind the scenes. So it's still just a cp command that you enter on the command line, but you have, just for a few seconds if that's all that's needed, enormous amounts of network bandwidth available, because it's handling all of that behind the scenes. And you have enormous amounts of compute available. If you don't only want to copy the data, but you want to run ffmpeg, there could be a superpowered ffmpeg version. It could be a superpowered version of pick your favorite executable or Linux command.
And then if you do it intelligently, you can also pipe them together. And that's just the command line part of it. The same thought process can be applied to writing Python applications, or writing Java applications, or running batch jobs, or executing your favorite application. You can rethink all of them in the way of: what if I want to make this as usable as something running just on my laptop, but with superpowers behind the scenes?
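One way to sketch that superpowered cp: split the object into byte ranges and copy the ranges concurrently, where in the cloud version each range copy could be one short-lived function with its own slice of network bandwidth. get_range and put_range are hypothetical stubs standing in for real object-storage calls.

```python
from concurrent.futures import ThreadPoolExecutor

def get_range(src, start, end):
    # Hypothetical stub for a ranged read from the source object store.
    return b"x" * (end - start)

def put_range(dst, start, data):
    # Hypothetical stub for a ranged/multipart write to the destination.
    pass

def copy_range(src, dst, start, end):
    put_range(dst, start, get_range(src, start, end))

def parallel_cp(src, dst, size, workers=100):
    chunk = -(-size // workers)  # ceiling division: bytes per worker
    ranges = [(i, min(i + chunk, size)) for i in range(0, size, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each range copy runs concurrently; in the cloud version each one
        # could be its own ephemeral function invocation.
        list(pool.map(lambda r: copy_range(src, dst, *r), ranges))

parallel_cp("cos://bucket-a/big-file", "cos://bucket-b/big-file", size=10**8)
```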
Jeremy: Right. Yeah. No. Again, I love that idea. I mean, one of the things I love about the supercomputer idea is just the cost of what that would be. I mean, you can kind of do it now. And you mentioned that example with the research, where you can just spin up all this stuff very quickly, do some big job, and then spin it all down and not pay for it. Because I do think that would be really interesting just in terms of what you could find out. What could somebody do from their garage if they had an idea and they wanted to run some sort of model and figure something out?
Rather than spending millions of dollars on virtual machines and paying for those, or even running serverless functions and having to figure out all that stuff, which gets really confusing. So I know we're not quite at the supercomputer yet, although I do think some people are starting to use that, or at least take advantage of it. So do you have other customers that are doing interesting things with just this massive parallelization?
Michael: Yeah. So there is one customer, I can't use the name, but it's a large enterprise that does revenue projection. So they always forecast their revenue for the next 30 days. And they've been running it so far, I think, once a day or so, because they couldn't execute it faster. So they needed that one day to execute it, and then they had a weekly run. So they moved this over to serverless and it's now operating at almost interactive speed. And I think that's a big part of what serverless is getting us now. In general, I think the next big wave is making basically everything that was not interactive so far, because of those kinds of constraints, interactive.
Give people the ability to do large things, but not run them overnight; do them right here, right now. And that unleashes another degree of productivity, because if you can turn things around quickly, we all know that from our daily development. If your inner-loop development is super fast and you do a Cmd-S and refresh and run your code again and again, that unleashes enormous productivity. And we can apply that to other domains as well, like data analytics, for example.
I don't have to worry anymore about subsetting my data down to a small chunk that I can test something on before I run it on the big data. I can always run it on the big data, because then I can be sure I didn't choose a subset that's not representative of the bigger amount of data I want to execute on.
Jeremy: Right. Yeah. Again, I think people are already doing this with CI/CD, just to build and deploy projects faster; they're using serverless to deploy ... it's pretty fascinating. All right. So one more thing I want to talk to you about before I let you go. So Cloud Code, sorry, Cloud Code Engine gives you all these capabilities. You can do all of this stuff. How does that compare to just Cloud Functions? Are they the same thing?
Michael: Yeah. It's a good question. So Cloud Functions, from a technology perspective, is really the most competitive one in terms of cold start times: super rapid scaling from zero to 1000 in the shortest possible amount of time. We have lots of optimizations built in there, lots of optimization in terms of prewarming machines, keeping them around for a longer period of time, caching them, and so on.
So Cloud Functions implements that capability and addresses that segment of the market. And with Code Engine we address the bigger space of the market where you have large capacities and all those constraints lifted. We want to bring them together, so people can use them in combination. So it's not one versus the other, but rather one complementing the other.
Jeremy: Right. Okay. So if I were starting out and just starting to build on IBM, would I go with Code Engine or with Cloud Functions? If I weren't worried too much about the constraints to start, let's say. Are Cloud Functions just easier to work with? I mean, I'm sure it's all easy to work with, but what would you suggest for a beginner moving over there?
Michael: For a beginner moving over there, I would probably start with Code Engine, for the simple reason that it's applicable to a broader spectrum of applications. So there is a very wide spectrum of applications they can serve with it. If they have requirements that make Cloud Functions really well suited, if they have hard requirements on those cold start times and things like that, then I would go for the Cloud Functions product. But as the entry point, because it's much broader in terms of addressable workloads, I would suggest Code Engine.
Jeremy: Awesome. All right. Actually, I have another question for you. I know we talked about a lot of customer examples, but I love hearing customer examples. I love hearing use cases. I love to know what people are doing with serverless, because I always hear new ones and it opens my mind. You mentioned a couple of them. You talked a little bit about the ETL ones and obviously the big data ones. What's your favorite use case or customer example of people using serverless on IBM?
Michael: My favorite one? There are so many. I think the most recent and most favorite one, because it fits into this day and age so well, is what the European Molecular Biology Laboratory is doing. Because they are doing research, medical research. They are looking for new medicines, for how they can cure certain diseases in a better way. And I think it just fits the world we're living in today so well. And we can help with technology to accelerate what they're doing. So that's why today I would pick them, because they fit so nicely into the world we're living in.
Jeremy: Yeah. No. That's amazing. I've heard so many stories of people using various serverless products and databases and other things to do research on COVID-19 and all these other things, just solving these problems, which I don't think would be anywhere near as easy or as quick without this technology.
Michael: Yeah. Exactly.
Jeremy: Awesome. All right. So listen, Michael, thank you so much for joining me and for the work that you're doing. Continuing to move the ball forward on serverless is not an easy task, so I really appreciate that you're on the front lines of that, moving it forward and thinking about it differently, right? If everybody thinks about it the same way, we're probably just going to repeat the same patterns we've done in the past. So thinking about it differently is amazing. I appreciate that, and I know others appreciate it. So if people want to get in touch with you, maybe ask you some questions, or find out more about Code Engine and Cloud Functions, how do they do that?
Michael: So they can get in touch with me via Twitter. I think you have the Twitter handle.
Jeremy: Yes. @Michael_BEH. And I'll put that in the show notes so we have it.
Michael: Yeah. Excellent. And if they want to use Code Engine, they can go to cloud.ibm.com/codeengine and try it out. It's in beta. Watch this space, it will be evolving quickly. So I'm looking forward to any kind of feedback people have. And reach out to me on any of those topics we talked about. I'm interested in what people out there are thinking, and we can maybe keep the dialogue going asynchronously as well.
Jeremy: Right. Yeah. Getting input from people, that feedback is going to be super important as this whole thing grows. All right. So Twitter, and I'll put your LinkedIn in the show notes as well. And I have ibm.biz/codeengine, or cloud.ibm.com/functions for Cloud Functions. We'll get that in the show notes. Michael, thanks again.
Michael: Thanks for having me. Really enjoyed it.
This episode is sponsored by IBM Cloud.