Show Notes
Some highlights of the show include:
- The company’s cloud-native journey, which accelerated with the acquisition of Uswitch.
- How the company assessed risk prior to their migration, and why they ultimately decided the task was worth the gamble.
- Uswitch’s transformation into a profitable company resulting from their cloud-native migration.
- The role that multidisciplinary, collaborative teams played in solving problems and moving projects forward. Paul also offers commentary on some of the tensions that resulted between different teams.
- Key influencing factors that caused the company to adopt containerization and Kubernetes. Paul goes into detail about their migration to Kubernetes, and the problems that it addressed.
- Paul’s thoughts on management and prioritization as CTO. He also explains his favorite engineering tool, which may come as a surprise.
Transcript
Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.
Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and today I am chatting with Paul Ingles. Paul, thank you so much for joining me.
Paul: Thank you for having me.
Emily: Could you just introduce yourself: where do you work? What do you do? And include, sort of, some specifics. We all have a job title, but it doesn't always reflect what our actual day-to-day is.
Paul: I am the CTO at a company called RVU in London. We run a couple of reasonably big-ish price comparison, aggregator-type sites. So, we help consumers figure out and compare prices on broadband products, mobile phones, energy—so in the UK, energy is something which is provided through a bunch of different private companies, so you've got a fair amount of choice on that kind of thing. So, we try to make it easier and simpler for people to make better decisions on the household choices that they have. I've been there for about 10 years, so I've had a few different roles. So, as CTO now, I sit on the exec team and try to help inform the business and technology strategy. But I've come through a bunch of teams. So, I've worked on some of the early energy price comparison stuff, some data infrastructure work a while ago, and then some underlying DevOps-type automation and Kubernetes work a couple of years ago.
Emily: So, when you get in to work in the morning, what types of things are usually on your plate?
Paul: So, I keep a journal. I use bullet journaling quite extensively. So, I try to track everything that I’ve got to keep on top of. Generally, what I would try to do each day is catch up with anybody that I specifically need to follow up with. So, at the start of the week, I make a list for each day, and then I also keep a separate column for just general priorities.
So, things that are particularly important for the week, themes of work going on, like, technology changes, or things that we're trying to launch, et cetera. And then I will prioritize speaking to people based on those things. So, I'll try and make sure that I'm focusing on the most important thing. I do a weekly meeting with the team. So, we have a few directors that look after different aspects of the business, and so we do a weekly meeting to just run through everything that's going on and share problems. We use the three P's model: sharing progress, problems, and plans. And we use that to try and steer what we do. And we also look at some other team health metrics.
Yeah, it's interesting actually. I think when I switched from working in one of the teams to being in the CTO role, things changed quite substantially. That list of things that I had to care about increased hugely, to the point where it far exceeded how much time I had to spend on anything. So, nowadays, I find that some things are much more likely to drop off. It's unfortunate, but you can't please everybody, so you just have to say, “I'm really sorry, but this thing is not high on the list of priorities, so I can't spend any time on it this week, but if it's still a problem in a couple of weeks' time, then we'll come back to it.” But yeah, it can vary quite a lot.
Emily: Hmm, interesting. I might ask you more questions about that later. For now, let's sort of dive into the cloud-native journey. What made RVU decide that containerization was a good idea and that Kubernetes was a good idea? What were the motivations and who was pushing for it?
Paul: That's a really good question. So, I got involved about 10 years ago. So, I worked for a search marketing startup in London called Forward Internet Group, and they acquired Uswitch in 2010. And prior to working at Forward, I'd worked as a consultant at ThoughtWorks in London, so I spent a lot of time working in banks on continuous delivery and things like that. And so when Uswitch came along, there were a few issues around the software release process. Although there was a ton of automation, it was still quite slow to actually get releases out. We were only doing a release every fortnight. And we also had a few issues with the scalability of data.
So, it was a monolithic Windows Microsoft stack. So, there were SQL Server databases, and .NET app servers, and things like that. And our traffic can be quite spiky, so when companies are in the news, or there's policy changes and things like that, we would suddenly get an increase in traffic, and the Microsoft solution would just generally kind of fall apart as soon as we hit some kind of threshold. So, I got involved partly to try and improve some of the automation and release practices, because at the search startup, we were releasing experiments every couple of hours, even.
And so we wanted to try and take a bit of that ethos over to Uswitch, and also to try and solve some of the data scalability and system scalability problems. And when we got started doing that—so that was in the early heyday of AWS; this was about 2008, when I was at the search startup. And we were used to using EC2 to spin up Hadoop clusters and a few other bits and pieces that we were playing around with. And when we acquired Uswitch, we felt like it was quickest for us to just create a different environment, stick it under the load balancer so end users wouldn't realize that some requests were being served off of the AWS infrastructure instead, and then just gradually go from there. We found that that was just the fastest way to move.
So, I think it was interesting, and it was a deliberate move, but I don't think we'd really anticipated quite how quickly we would shift everything, or the degree to which we would follow through on it. And so when Forward made the acquisition, I joined in the summer of 2010, and myself and a colleague wrote a little two-pager on, here are the problems we see, here are the things that we think we can help with, the ways that the technology approach we'd applied at Forward would carry across, and what benefits we thought it would bring. Fortunately, Forward was a privately held business—we were relatively small but profitable—and the owner of that business was quite risk-affine. He was quite keen on playing blackjack and other stuff. So, he was pretty happy with talking about probabilities of success.
And so we just said, we think there's a future in it if we can get the wheels turning a bit better. And he was up for it. He backed us and we just took it from there. And so we replaced everything, from self-hosted physical infrastructure running on top of .NET to all AWS-hosted, running a mix of Ruby, and Clojure, and other bits and pieces, in about two years. And that's just continued from there. So, the move to Kubernetes was a relatively recent one; that was only within the last—I say ‘recent.’ It was about two years ago that we started moving things in earnest. And then you asked what was the rationale for switching to Kubernetes—
Emily: Let me first ask you, when you were talking with the owner, what were the odds that you gave him for success?
Paul: [laughs]. That's a good question. I actually don't know. I think we always knew that there was a big impact to be had. I don't think we knew the scale of the upside. So, I don't think we—I mean, at the time, Uswitch was just about breaking even, so we didn't realize that there was an opportunity to radically change that. I think we underestimated how long it would take to do.
So, I think we’d originally thought that we could replace most of the stuff that needed replacing within six months. We had an early prototype out within two or three weeks, because we'd always placed a big emphasis on releasing early, experimenting, iterative delivery, A/B testing, that kind of thing. So, I think it was almost like that middle term that was the harder piece. And there was definitely a point where… I don't know, I think it was this classic situation of pulling on a ball of string, where what we wanted to do was to focus on improving the end-user experience, because our original belief was that, aside from the scalability issues, the existing site just didn't solve the problem sufficiently well, that it needed an overhaul to simplify the journeys, and simplify the process, and improve the experience for people.
We were focusing on that, and we didn't want to get drawn into replacing a lot of the back office and integration-type systems, partly because there was a lot of complexity there, but also because you then have to engage with QA environments, and test environments, and sign-offs with the various people that we integrate with. But it was, as I said, this kind of tugging on a ball of string, where every improvement that we made in the end-user experience had knock-on effects: we would increase conversion rate by 10 percent, but through doing that, we would introduce downstream errors in the ways that those systems integrated, and so we gradually just ended up having to pull in more and more pieces to make it work. I don't think we ever gave odds of success. I think we underestimated how long that middle piece would take. I don't think we really anticipated the degree of upside that we would get as a consequence, through nothing other than just making releases quicker and being able to test and move faster. And focusing on end-user experience was definitely the right thing to focus on.
Emily: Do you think, though, that everybody perceived it as a risk? I'm just asking because you mentioned the blackjack. Was this a risk that could fail?
Paul: Well, I think the interesting thing about it was that we knew it was the right thing to do. So, again, I think our experience as consultants at ThoughtWorks was on applying continuous delivery, what we would today call DevOps, applying those practices to software delivery. And so we'd worked on systems where there weren't continuous integration servers and where people weren't releasing every day, and then we’d worked in environments where we were releasing every couple of hours, and we were very quickly able to hone in on what worked and discard things that didn't. And so I think because we've been able to demonstrate that success within the search business, I think that carried a great deal of trust.
And so when it came to talking about things we could potentially do, we were totally convinced that there were things that we could improve. I think it was a combination of things: there was a ton of potential, and we knew that there was a new confluence of technologies and approaches that could be successful if we were able to just start over. And then I think there was also probably a healthy degree of, like, naive overconfidence in what we could do, so we would just throw ourselves into it. So, it was hard work, but yeah, it was ultimately highly successful. It's something I'm exceedingly proud of today.
Emily: You said something really interesting, which is that Uswitch was barely profitable. And if I understand correctly, that changed for the better. Can you talk about how this is related?
Paul: Yeah, sure. I think the interesting thing about it was that we knew that there was something we could do better, but we weren't sure what it was. And so the focus was always on being able to release as frequently as we possibly could to try and understand what that was, as well as trying to just simplify and pay back some of the technical debt. So, trying to overcome some of the artificial constraints that existed because of the technology choices that people had made—perfectly decent decisions back in the day, but platforms like AWS offered better alternatives now. So, we just focused on being able to deliver iteratively, and just keep focusing on continual improvement, releasing, understanding what the problems were, and then getting rid of those little niggly things.
The manager I had at Forward was this super—I don't know, he just had the perfect ethos, and he was driven—so we were a team that were focused on doing daily experiments. And so we would rely on data on our spend and data on our revenue. And that would come in on a daily cycle. So, a lot of the rhythm of the team was driven off of that cycle. And so as we could run experiments and measure their profitability, we could then inform what we would do on the day.
And so, we had a handful of long-running technology things that we were doing, and then we would also have other tactical things that he would have ideas on. He would have some hypothesis of, well, “Maybe this is the reason that this is happening; let's come up with a test that we can use to try and figure out whether that's true.” We would quickly throw something together to help us either disprove it or support it, and we would put it live, see what happened, and then move on to the next thing. And so what we wanted to do was instill a bit of that environment in Uswitch. And so a lot of it was being able to release quickly and making sure that people had good data in front of them. I mean, even tools like Google Analytics were something which we were quite au fait with using, but which didn't have broad adoption at the time. And so we were using that to look at site behavior and what was going on, and reason about what was happening. So, we just tried to make sure that people were directly using that, rather than just making changes on a longer cycle without data at all.
Emily: And can you describe how you were working with the business side, and how you were communicating, what the sort of working relationship was like? If there were any misunderstandings on either side.
Paul: Yeah, it’s a good question. So, when I started at Uswitch, the organizational structure was, I guess, relatively classical. So, you had a pooled engineering team. So, it was a monolithic system, deployed onto physical infrastructure. So, there was an engineering team, there was an operations team, and then there were a handful of people that were business-specific in the different markets that we operated in. So, there were a couple of people that focused on, like, the credit card market; a couple of people that focused on energy, for example.
And I used to call it the stand-up swarm: in the morning, we would sit on our desks and you would see almost the entire office move between the different card walls that were dotted around the office. Although there was a high degree of interaction between the business stakeholders, the engineers, designers, and other people, it always felt slightly weird that you would have almost all of the company interested in almost everything that was going on. And so I think the intuition we had was that a lot of the principles we would use to structure software, loosely coupled but highly cohesive, should or could apply to the organization itself. And so what we tried to do was make sure that we had multidisciplinary teams that had the people in them to do the work. So, for the early days of the energy work, there was only a couple of us in it.
So, we had a couple of engineers, and we had a lady called Emma, who was the product owner. She used to work in the production operations team, focused on data entry for the products that different energy providers would send us, but she had the strongest insight into the domain problem: what problem consumers were trying to overcome, and what ways we could react to it. And so, when we got involved, she had a couple of ideas that she'd been trying to get traction on but had been unable to. And so we had, I think, a half-day session in an office. So, we took over the boardroom at the office and just said, “Look, we could really do with a separate space away from everybody to be able to focus on it. And we just want to prove something out for a couple of weeks. And we want to make sure that we've got space for people to focus.”
And so we had a half-day in there, and we had a conversation about, “Okay, well, what's the problem? What's the technical complexity of going after any of these things?” And there were a few nuances, too. Like, if you choose option A, then we have to get all of the historical information around it, as well as the current products and market. Whereas if we choose option B, then we can simplify it down, and we don't need to do all of that work, and we can try and experiment with something sooner.
So, we wanted it to be as collaborative as possible because we knew that the way we would be successful was by trying to execute on ideas faster than we’d been able to before. And at the same time, we also wanted to make sure that there was a feeling of momentum. I think there was probably a healthy degree of slight overconfidence, but we were also very keen to be able to show off what we could do. And so we genuinely wanted to try and improve the environment for people so that we could focus on solving problems quicker, trying out more experiments, being less hung up on whether it was absolutely the right thing to do, and instead just focusing on testing it. So, were there tensions? I think there were definitely tensions, though not so much on the technical side; we were very lucky that most of the engineers that already worked there were quite keen on doing something different, and so we would have conversations with them and just say, “Look, we'll try everything we can to remove the constraints that exist today.”
I think a lot of the disagreement or tension was whether or not it was the right problem to be going after. So, again, the search business that we worked in was doing a decent amount of money for the number of people that were there, and we knew there was a problem we could fix, but we didn't know how much runway it would have. And so there was a lot of tension on whether we should be pulling people into focusing on extending the search business, or whether we needed to focus on fixing Uswitch. So, there was a fair amount of back and forth about whether or not we needed to move people from one part of the business to another and that kind of thing.
Emily: Let's talk a little bit about Kubernetes, and how Uswitch decided to use Kubernetes, what problem it solved, and who was behind the decision, who was really making the push.
Paul: Yeah, interesting. So, I think containers were something that we'd been experimenting with for a little while. So, as with a lot of our culture, we were quite risk-affine. So, we were quite keen to be trying out new technologies, and we'd been using modern languages and platforms like Clojure since the early days of them being available. We’d been playing around with containers for a while, and I think we knew there was something in it, but we weren't quite sure what it was.
So, I think, although we were playing around with it quite early, we were quite slow to choose one platform or another. In the intervening period, I guess, we went from the more classical way of running Puppet across a bunch of EC2 instances that ran a version of your application to using ECS, Amazon's container service. And I guess the thing that prompted a bit more curiosity into Kubernetes was that—I forget the project I was working on, but I was working on a team for a little while, and then I switched to go do something else. And I needed to put a new service up, and rather than just doing the thing that I knew, I thought, “Well, I'll go talk to the other teams.” I'll talk to some other people around the company, and find out what's the way that I ought to be doing this today, and there was a lot of work around standardizing the way that you would stand up an ECS cluster.
But I think even then, it always felt like we were sharing things in the wrong way. So, if you were working on a team, you had to understand a great deal of Amazon to be able to make progress. Back when I got started at Uswitch, when I talk about doing the work on the energy migration, AWS really only offered EC2, load balancers, firewalling, and then eventually relational databases, and so back then the amount of complexity to stand something up was relatively small. But come to a couple of years ago, you had to appreciate and understand routing tables, VPCs, and the security rules that would permit traffic to flow between those. It was just relatively non-trivial to do something that was so core to what we needed to be able to do.
And I think the thing that prompted Kubernetes was that, on the Kubernetes project side, we'd seen a gradual growth and evolution of the concepts, and abstractions, and APIs that it offered. And so there was a differentiation between it and ECS or—I actually forget what CoreOS’s equivalent was. I think maybe it was just called CoreOS. But there were a few alternative offerings for running containerized, clustered services, and Kubernetes seemed to take a slightly different approach, in that it was more focused on end-user abstractions. So, you had a notion of making a Deployment: that would contain replicas of a container, and you would run multiple instances of your application, and then that would become a Service, and you could then expose that via Ingress. So, there was a language that you could use to talk about your application and your system that was available to you in the environment that you're actually using.
Whereas AWS, I think, would take the view that, “Well, we've already got these building blocks, so what we want our users to do is assemble the building blocks that already exist.” So, you still have to understand load balancers, you still have to understand security groups; you have to understand a great deal more at a slightly lower level of abstraction. And I think the exciting potential of Kubernetes was that if we chose something that offered better concepts, then you could reasonably have a team that would run some kind of underlying platform, and then have teams build upon that platform without having to understand a great deal about what was going on inside. They could focus more on the applications and the systems that they were hoping to build. And that would be slightly harder on the alternative.
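To make the abstraction chain Paul describes concrete, here is a minimal sketch of what a Deployment, Service, and Ingress look like as Kubernetes manifests. The names, image, ports, and hostname are all illustrative, not Uswitch's actual configuration, and the API versions shown are current ones rather than those available at the time:

```yaml
# Deployment: runs and replaces replicated copies of a container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                      # run three instances of the application
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: example/app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: a stable virtual address in front of the Deployment's Pods.
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app               # matches the Pods created above
  ports:
    - port: 80
      targetPort: 8080
---
# Ingress: exposes the Service to HTTP traffic from outside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  rules:
    - host: app.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```

The point Paul is making is visible in the manifests themselves: nothing here mentions instance types, security groups, or load balancer wiring. Teams describe the application, and the platform team's cluster supplies the rest.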
So, I think at the time, again, it was one of those fortunate things where I was just coming to the end of another project and was in the fortunate position where I was just looking around at the various different things that we were doing as a business, and what opportunity there was to do something that would help push things on. And Kubernetes was one of those things which a couple of us had been talking about, and thinking, “Oh, maybe now is the time to give it a go. There's enough stability and maturity in it; we're starting to hit the problems that it's designed to address. Maybe there's a bit more appetite to do something different.”
So, I think we just gave it a go. We built a proof of concept, showed that it could run the most complex system that we had, and I think we also did a couple of early experiments on the ways in which Kubernetes had support for horizontal scaling and other things which were slightly harder to put into practice in AWS. And so we did all that, and I think it gradually just grew out from there; we took the proof of concepts to other teams that were building products and services. We found a team that were struggling to keep their systems running because they were a tiny team. They only had, like, two or three engineers in. They had some stability problems over a weekend because the server ran out of hard disk space, and we just said, “Right. Well, look, if you use this, we'll take on that problem. You can just focus on the application.” It kind of just grew and grew from there.
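The horizontal scaling Paul mentions is expressed through the same kind of declarative object. As a hedged sketch, a HorizontalPodAutoscaler targeting a Deployment like the one in the earlier example might look like this; the thresholds and replica bounds are illustrative only:

```yaml
# HorizontalPodAutoscaler: grows or shrinks the Deployment's replica
# count based on observed CPU utilization across its Pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app              # the Deployment to scale
  minReplicas: 3                   # never scale below the baseline
  maxReplicas: 20                  # cap for traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```

For a site whose traffic spikes whenever energy suppliers are in the news, this is the kind of problem that was harder to solve with hand-rolled EC2 tooling.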
Emily: Was there anything that was a lot harder than you expected? So, I'm looking for surprises as you're adopting Kubernetes.
Paul: Oh, surprises. I think there was a non-trivial amount that we had to learn about running it. And again, at the point at which we'd picked it up, it was, kind of, early days for automation. I think maybe Google had just launched Google Kubernetes Engine on Google Cloud. Amazon certainly hadn't even announced that hosted Kubernetes would be an option. There was an early project within Kubernetes, called kops, that you could use to create a cluster, but even then it didn't fit our network topology because it wouldn't work with the VPC networking that we needed and expected within our production infrastructure.
So, there was a lot of that kind of work in the early days; to try and make something work, you had to understand in quite some detail what each component of Kubernetes was doing. As we were gradually rolling it out, I think the thing that was most surprising was that, for a lot of people, it solved a lot of problems, which meant they could move on, and I think people were actually slightly surprised by that. Which, [laughs], sounds like quite a weird turn of phrase, but I think people were positively surprised at the amount of stuff that they didn't have to do, for solving a fair number of problems that they had. There were a couple of teams doing things at slightly larger scale where we had to spend a bit more time on improving the performance of our setup. So, in particular, there was a team that had a reasonably strong requirement on the latency overheads of Ingress.
So, they wanted their application to respond within, I don't know, I think it was maybe 200 milliseconds or something. And through setting up the monitoring and other bits and pieces that we had, we realized that Ingress was doing all right, but there was a fair amount of additional latency added at the tail as a consequence of a couple of bugs or other things that existed in the infrastructure. So, there were definitely a lot of little niggly things that came up as we were going, but we were always confident that we could overcome them. And, as I said, I think a lot of teams saw benefits very early on. Some other teams were perhaps a little bit more skeptical: they'd got their own infrastructure already, they knew how to operate it, it was highly tested, they'd already run capacity and load tests on it, and they were convinced it was the most efficient thing that they could possibly run. But I think even they realized, over the long run, that there was more work they needed to do than they should be focusing on, and so they were quite happy to ultimately switch over to the shared platform and infrastructure that the cloud infrastructure team ran.
Emily: As we wrap up, there's actually a question I want to go back to, which is how you were talking about the shifting priorities now that you've become CTO. Do you have any sort of examples of, like, what are the top three things that you will always care about, that you will always have the energy to think about? And then I'm curious to have some examples of things that you can't deal with, you can’t think about. The things that tend to drop off.
Paul: The top three things that I always think about. So, I think, actually, what's interesting about being CTO, that I perhaps wasn't expecting, is that you're ever so slightly removed from the work, so you can't rely on the same signals or information to be able to make a decision on things. And so when I give the Kubernetes story, it's one of those things where, because I’d moved from one system to another and was starting a new project, I experienced some pain. It’s like, “Right. Okay, I've got to go do something to fix this. I've had enough.”
And I think the thing that I'm always paying attention to now is trying to understand where that pain is next, and trying to make sure that I've got a mechanism for being able to appreciate that. So, a lot of the things I try to spend time on are things to help me keep track of what's going on, and then help me make decisions off the back of it. I think the things that I always spend time on are generally things trying to optimize some process or invest in automation. So, a good example at the moment is, we're talking about starting to do canary deployments. So, starting to automate the actual rollout of some new release, and being able to automate a comparison against the existing service, looking at latency, or some kind of transactional metrics, to understand whether it's performing as well as, or differently from, something historical.
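As a hedged illustration of the canary pattern Paul describes, one minimal way to approximate it with only core Kubernetes objects is to run the candidate release as a second, smaller Deployment whose Pods match the existing Service's selector, so it receives a proportional slice of live traffic. The automated latency and metric comparison he mentions would be layered on top; tools such as Flagger or Argo Rollouts automate that analysis. All names and versions here are illustrative:

```yaml
# Canary Deployment: one replica of the candidate release running next
# to, say, three replicas of the stable release. Because its Pods carry
# the same `app: example-app` label the Service selects on, roughly a
# quarter of requests reach the new version while metrics are compared.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
      track: canary
  template:
    metadata:
      labels:
        app: example-app           # matched by the existing Service
        track: canary              # distinguishes canary Pods in dashboards
    spec:
      containers:
        - name: web
          image: example/app:1.1.0-rc1   # hypothetical candidate release
          ports:
            - containerPort: 8080
```

If the canary's latency or transactional metrics regress against the stable Deployment's, it can be scaled to zero and the rollout stopped; if they hold, the stable Deployment is updated to the new image and the canary removed.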
So, I think the things that I tend to spend time on are process-oriented, or are things to try and help us go quicker. One of the books that changed my opinion of management was Andy Grove’s High Output Management. I forget who recommended it to me, but it completely altered my opinion of what value a manager can add. So, one of the lenses I try to apply is: of everything that's going on, what's the handful of things that are going to have the most impact or leverage across the organization? And I try to spend my time on those. I think where it gets tricky is that you have to go broad and deep. So, as much as there are broad things that have a high consequence on the organization as a whole, you also need an appreciation of what's going on in the detail, and I think that's always tricky to manage. I'm sorry, I forget what the second part of your question was.
Emily: The second part was, do you have any examples of the things that you tend to not care about? That presumably someone is asking you to care about, and you don't?
Paul: [laughs]. Yeah, it’s a good question. I don't think it's that I don't care about it. I think it's that there are some questions that come my way that I know I can defer, or they're things which are easy to hand off. So, I think the… that is a good question. I think the things that are always tricky to prioritize are things which feel high consequence but are potentially also very close to bikeshedding.
And that is something which is fair—I'd be interested to hear what other people said. So, a good example is, like, choice of tooling. When I was working on a team, or on a problem, we would focus on choosing the right tool for the job, and we would bias towards experimenting with tools early and figuring out what worked. I think now you have to view the same thing through a different lens, because there's a degree to which you also incur an organizational cost as a consequence of having high variability in the programming languages that you choose to use. So, it's not that I don't care about it; it's that, over the time I've been doing this role, I've gradually learned to let go of things that I would previously have thoroughly enjoyed getting involved in.
And so you have to step back and say, “Well, actually I'm not the right person to be making a decision about which technology this team should be using. I should be trusting the team to make that decision.” And you have to kind of—I think that over the time I've been doing the role, you kind of learn which are the decisions that are high consequence that you should be involved in and which are the ones that you have to step back from. And you just have to say, look, I've got two hours of unblocked time this week where I can focus on something, so of the things on my priority list—the things that I've written in my journal that I want to get done this month—which of those things am I going to focus on, and which of the other things can I leave other people to get on with, and trust that things will work out all right?
Emily: That's actually a very good segue into my final question, which is the same for everyone. And that is, what is an engineering tool that you can't live without—your favorite?
Paul: Oh, that’s a good question. So, I don't know if this is a cop-out by not mentioning something engineering-related, but I think the tool and technique which has helped me the most, as I've had more and more management responsibility and more to keep track of, is bullet journaling. Up until, I don’t know, maybe five years ago, I'd focused on using either iOS apps or note tools on both my laptop and phone, and so on, and it never really stuck. Bullet journaling, through using a pen and a notepad, forced me to go a bit slower. So, it forced me to write things down, to think through what was going on, and there is something about it being physical which makes me treat it slightly differently.
So, I think bullet journaling is one of the things which has—yeah, it's really helped me keep track of what's going on, and given me the ability to look back over the week and figure out what the things were that frustrated me and what I can change going into next week. One of the suggestions from the person who came up with bullet journaling is this idea of an end-of-week reflection. And so, one of the things I try to do—it's been harder doing it now that I'm working at home—is to spend just 15 minutes at the end of the week thinking: what are the things that I'm really proud of? What are some good achievements that I should feel really good about going into next week? And so I think a lot of the activities that stem from bullet journaling have been really helpful. Yeah, it feels like a bit of a cop-out because it's not specifically technology-related, but bullet journaling is something which has made a big difference to me.
Emily: Not at all. That's totally fair. I think you are the first person who's had a completely non-technological answer, but I think I've had someone answer Slack, something along those lines.
Paul: Yeah, I think what's interesting is there are loads of those tools that we use all the time. Like, Google Docs is something I can't live without. So, there's a ton of things that I use day-to-day that are hard to let go of, but I think the things that have made the most impact are the ones that help me deal with a stressful job and give me the ability to manage myself a little bit. Yeah, it's been one of the most interesting things I've done.
Emily: And where could listeners connect with you or follow you?
Paul: Cool. So, I am @pingles on Twitter. My DMs are open, so if anybody wants to talk on that, I'm happy to. I’m also on GitHub under pingles, as well. So, @pingles will generally get you to me in most places.
Emily: Well, thank you so much for joining me.
Paul: Thank you for talking. It's been good fun.
Announcer: Thank you for listening to The Business of Cloud Native podcast. Keep up with the latest on the podcast at thebusinessofcloudnative.com and subscribe on iTunes, Spotify, Google Podcasts, or wherever fine podcasts are distributed. We'll see you next time.