About the Guest
Cory G Watson is a Technical Director at SignalFx with 20 years of SWE, SRE and leadership experience at the likes of Twitter and Stripe. He's an observability wonk, optimist, and lover of sneakers. He hopes you're having a great day!
Mike Julian: This is the Real World DevOps podcast. I'm your host, Mike Julian. I'm setting out to meet the most interesting people doing awesome work in the world of DevOps. From the creators of your favorite tools, to the organizers of amazing conferences, from the authors of great books to fantastic public speakers, I want to introduce you to the most interesting people I can find.
Mike Julian: Ah, crash reporting, the oft-forgotten piece of a solid monitoring strategy. If you struggle to replicate bugs or elusive performance issues you're hearing about from your users, you should check out Raygun. Whether you're responsible for web or mobile applications, Raygun makes it pretty easy to find and diagnose problems in minutes instead of what you usually do which, if you're anything like me, is ask the nearest person, "Hey, is the app slow for you?" and get a blank stare back because, "Hey, this is Starbucks, and who's the weird guy asking questions about mobile app performance?" Anyways, it's Raygun. My personal thanks to them for helping to make this podcast possible. You can check out their free trial today by going to raygun.com.
Mike Julian: Hi folks. I'm Mike Julian, your host for the Real World DevOps podcast. My guest this week is Cory Watson, Technical Director for the Office of the CTO at SignalFx. He's previously run observability at Stripe and Twitter. Cory, welcome to the show.
Cory Watson: Thanks for having me.
Mike Julian: I think it's interesting that you have gone from running observability teams at pretty interesting places like Stripe and Twitter.
Cory Watson: Thank you, thank you.
Mike Julian: To now, you're working for the enemy.
Cory Watson: That's a good way to put it.
Mike Julian: Yeah, you have suddenly gone to the vendor side.
Cory Watson: Yeah, the nicer way that I put it, is I say I switched sides of the table.
Mike Julian: Ah, yes. That is a much better way to put it. Apologies to my sponsors for insinuating that they're the enemy. You're completely right, it really is just the other side of the table.
Cory Watson: That implies that it's just a simple change in aspect, though. It's really not. It's actually a pretty fascinating difference, I think.
Mike Julian: Well, why don't you tell us more about that? What is that difference? What is it all about?
Cory Watson: I think, in the past, when you work at a place that uses these vendors, you're often trying to maintain, or at least it's always been my goal to maintain, a sort of neutrality, maybe a shim layer, if that's the right way to put it. How do we retain our independence and make sure that we could switch, if necessary, and all these other things? It's good both from a technical perspective and from a leverage perspective, right? Because you wanna be able to switch if you need to or go to the new, cool thing that might come out. Not that we jump and switch that frequently, but you wanna be able to retain that independence. Now, suddenly, I'm on the opposite side. I think there are two interesting bits about it. One is the change in perception, or the change in the approach. The second is just the learning experience I've had from watching business being conducted from this side of the table. I guess we can start on the difference in perspective. To your earlier question, it's like, alright. Here I am previously sitting over here, going like, "Okay, don't tell them anything and pretend like you hate everything that they show you, and never show your true colors."
Mike Julian: Fantastic negotiating tactics.
Cory Watson: For lack of any better training, I think that's the place that you try to go. But at the same time, it's actually somewhat different than I imagined, because I'm happy to work for a company that isn't trying to sell you something just for the sake of selling it to you. I think this is true of pretty much all vendors: we only want you, as a customer, if you're ultimately going to be happy. Especially given how long many of these engagements, contracts, purchase periods, or whatever last, you don't just want to ease in, make a buck, and then ease back out again. You've got to stay with it.
Cory Watson: In many ways, I think that my past approach of wanting to hold everything close to the vest, to use that idiom, doesn't really work that well, because you need to give as much information as you can to the vendor so that the vendor can, hopefully, tailor the solution as well as possible, right? At the same time, I think it all comes down to price at the end of the day. Luckily, I don't have to have that conversation. I am only here to talk about the pros and cons and the approaches of how to do observability and how SignalFx can be helpful for you. In switching sides, I've actually seen how withholding information can hinder the process of adoption and understanding. Because if you're like, "Well, we're not gonna tell you how many things we're monitoring, or how many metrics there are, or what our challenges are," it makes it extremely difficult to articulate a value proposition to the other people, because suddenly I'm like, "If I can't help you size this, I don't know how to have the conversation." It's interesting to be on the side where you suddenly lack so much information because-
Mike Julian: I've run into that in the Amazon world, where people will say, "Oh, we're spending a ton of money with Amazon, but we don't want to tell them what our future plans are. We don't wanna tell them about our product roadmap because security reasons. What if they leak it, what if they try to do it themselves?" On some level, like it's AWS so maybe it will actually happen.
Cory Watson: Yeah, as we saw last week.
Mike Julian: Right. When you're spending that much money with a company, when a company is such a core aspect of how you do business, for example on monitoring product, it's no longer a vendor. They're a partner.
Cory Watson: Yeah, that's an excellent way to put it. I love that phrasing because I felt that way even as a customer. I've been a customer of many companies in this context, in the monitoring, observability, whatever context system stuff, and it's true. Once that spend gets to a certain amount, or when it's such a critical part of your infrastructure, these systems are effectively the highest criticality of your internal systems. Because if you can't see what's happening, you can't make changes. You can't run if you can't see. It's absolutely right. It needs to be a partnership. The more information you can give, the better. Once everybody gets under a mutual NDA, things loosen up a bit, I think. It's easier to share. It's also understandable, because things like the number of hosts you run and the magnitude that some companies operate at are sensitive subjects. I think it's very reasonable to hold it close. But at the same time, the better the information you can give, the better the solution can be tailored. So yeah, that's one side of the difference from switching over to being a vendor. Luckily, I think, on the other side of what I've learned ... actually, I shouldn't use "side" because that's confusing.
Cory Watson: So there's that aspect, but then the learning experience is pretty interesting for me too: how sales organizations structure themselves and work through this stuff. I'm not in sales. I've always worked in infrastructure at companies, be it for observability things or as an SRE. Suddenly now, I'm faced with learning how they approach it. Recognizing who you're working with at the customer: who's your champion there? Just like anything, you need a champion. Someone who's going to help. It's not strictly adversarial, but at the same time, the terminology is often like, "Well, who are the people who are fighting this? What are their motivations? Who are the people who are championing? What are their motivations?"
Cory Watson: It's interesting because I've always used LinkedIn, mostly as a tool to stay connected with people I used to work with. Nowadays, I don't see this because I don't do it, but the sales people do. They know who everybody in the org is because LinkedIn is the org-chart. It's like, "Well, who do they answer to? Who's their boss? Who's gonna be in this meeting? What are their titles? Who's got the purse strings in this conversation?" It's all stuff that I look back and I see. I remember salespeople asking me these sorts of questions. I was always like, "Harrumph, why are you bugging me with these questions?" Only to realize that they're trying to figure out how to position themselves to best answer the questions that are out there, and also to sort of understand is this going to be fruitful, because companies can waste a lot of time if they're talking to the wrong people or all this other stuff. It's been really fascinating. Some of this was intuitive to me, just in working with small companies. It's just been fascinating to be on this side of the table, as we've been describing it, and learn how they approach this problem because I know how to approach systems problems. But these are essentially people-problems which is, at least in this context, all new to me.
Mike Julian: Yeah, absolutely. I can totally echo everything you're saying on that as a consultant myself. I do also go through LinkedIn and start mapping org charts. Definitely been there. What's interesting to me about the sales conversations is, once you stop looking at vendor salespeople as adversaries, as someone trying to sell you a thing, you lose this idea of, "They're trying to sell you a thing whether you need it or not. They're trying to foist a thing on you. They're trying to trick you into signing a contract." That's not actually how any of it goes, because most salespeople are not measured strictly on how much money they bring in, but also on retention.
Cory Watson: Yeah, yeah. It's a good point. It's about that year over year, over year. No one wants to sign a contract. I often saw companies saying, "Oh, we want you to sign these long contracts." I thought of it in purely dollar magnitude. I never thought of it as the comfort of that relationship being there. It's reasonable that when you sign up with a new company, you don't want to immediately get into some three-year deal or something. But, at the same time, knowing that's gonna be there helps every side of the equation. I used to think about the contracts we were signing in raw margin terms. Well, I know what it costs to buy a server, put it in a rack, and run it. But you don't think about the engineering organization that's built up to also deal with all the silly stuff I'm asking for, like the 100 different features I've got in a list, making sure that those are all getting done. I'm not the only customer. There's so much that goes on behind the scenes. I don't know. I feel privileged every day to be able to see it from this side while still leveraging the fact that I'm an observability wonk and I do this stuff every day. I still get to leverage my strengths while also shoring up my weaknesses when it comes to the sales side of the table. It's been a lot of fun.
Mike Julian: Yeah, learning the business side of things — I think it's been the most interesting aspect of my entire career. For me, it's basically been the past five years, learning how business is done, especially selling infrastructure services and infrastructure products. I think it's also the most impactful thing I've ever done in my career. All the knowledge and skills I've gained over my entire career of doing monitoring and observability and infrastructure: yeah, it's all great. But the thing that's really made the most difference for me was learning how the business functions.
Cory Watson: I don't think I'd thought about it that way until listening to that explanation. If I rewind a little bit, the reason I took the job was I felt like I could have more of an effect on our industry by helping people connect these dots and leverage a vendor, if it was the right vendor for them, to get this job done instead of developing some of this stuff in-house. Not that you shouldn't necessarily do that, but there's a trade-off. Now, basically, I have shorter conversations, leveraging my past experience, to your exact point, to help them make this decision. Then hopefully I have a significant, outsized impact. The conversations are almost small compared to the impact that they have. Whereas my engineering impact is so much more long-term, everyday typing and drudgery, than going and spending two hours meeting a customer and having these conversations directly.
Mike Julian: Yeah, absolutely. When you're working on the vendor side, you do have the ability to be a much larger multiplier. When you're working inside of a company, the effect you have is pretty limited. If you want to affect an entire industry, it's possible but it's hard. Especially if your company is not a vendor, if you're at a vendor, and that vendor is also sizable and doing interesting things, then you become a multiplier.
Cory Watson: An interesting connection to my past life: a fellow by the name of Patrick McKenzie, who works at Stripe and goes by @patio11 on Twitter, recently wrote a blog post about why he joined Stripe. Part of it was, even though he had previously worked on trying to help small companies succeed, working for that vendor, in this case, gave him an outsized impact on all of those companies. He's probably much more articulate about it than I am. I just read it yesterday. It was like, "Yeah, man, I believe that. That's what I'm trying to do."
Mike Julian: I will find that post and put it in the show notes because I'm a big fan of Patrick.
Cory Watson: Yeah, he's a pretty good dude.
Mike Julian: Transitioning a little bit, you've had a background running observability teams. Now, you've also been an IC at various places. But now you're in this weird middle ground, where you're not actually sales. You're not running a team anymore, but you're not strictly an IC either. It sounds like you've got a weird role.
Cory Watson: I like that you basically defined it by the absence of things, instead of the presence of any one thing, because that's what I find tough about it. To pick up on the thread you dropped there: I've now spent a little over 20 years mostly doing IC work, occasionally engineering manager, a few VP roles. In all those cases, though, there was a fairly direct connection to either infrastructure work or engineering, programming output, even as a consultant, right? I also did consulting, all that more general programming consulting. There was always this time spent with hands on keyboard, and making code pop out the other side was what I was judged on, whether it was myself or the people that I worked with as a manager, or what have you.
Cory Watson: In this role, it's tricky, because I just said that a lot of my job is to go and have pre-sales conversations. I'm not a sales engineer, and I'm also not a salesperson. I'm often brought out as, "Well, here's Cory Watson, who, as you said at the beginning of this session, has done a bunch of observability work. He's here to effectively just have a friendly conversation with you about what you're doing." Thankfully, I don't work at a company that expects me to just shill for them or anything else, right? I tell them what I think and what the approaches are. This is rarely a thing that you do with just one vendor. There are often a few that overlap or are mutually beneficial to each other. I think the trick, though, is trying to figure out: what do I value at the end of the day? What releases those endorphins in my brain, or whatever triggers my happiness and excitement, and makes me want to get up and come to work every day? The conversation we started with, this idea that we're learning much more about the business and having a larger effect, is one thing, but that's a long game. That's like parenting. That takes decades to pay off. It's often unfulfilling in the moment. I often joke with my partner that our daughter is not gonna repay us for all this work for many years. For now, we're just, "Nope, just do the kid thing." The difficulty here is connecting that. I've been spending a lot of time documenting my work, because it often hasn't felt like I "accomplished much." I'm making air quotes. I've had to spend a lot of time documenting what I am doing and learning to recalibrate my internal measure of what types of accomplishments I've had. How many conversations that I had with customers, sometimes months ago, have now materialized into someone who's a happy customer? How many conversations did I collect feedback on? How much insight have I given into product changes that ultimately land and turn into something like that?
I think maybe there's something to say here about that outsized impact you were describing. That larger impact you have also often takes much longer to propagate.
Mike Julian: Absolutely.
Cory Watson: The waves, even though they're big, take quite a long time to travel. That's a big difference in internal awareness of your own role.
Mike Julian: Yeah, absolutely. When you think about the vendors that we all look up to like, "Oh, well, they've made a really cool product, look how old they are." What we see now was not quick. Success looks an awful lot like drudgery. [laughter]
Cory Watson: You caught me off guard for that one. We may have to edit that laugh down. That was a snorty one.
Mike Julian: Success looks an awful lot like just hard work. You're absolutely right. The success that you see in your day-to-day work isn't actually felt for months.
Cory Watson: Also, things can seem simple. I've been doing observability work basically as long as observability's been associated with computer stuff in some capacity. Since, I don't know, 2013-ish, something like that. Now, when I have conversations with some of our customers, a lot of it is re-discussing my past experiences, some of the decisions I made as a customer. That often doesn't feel ... I feel like I'm just telling them something I already knew. I don't feel like that's valuable. I often wonder, for example in this conversation, am I giving you new insights or just saying something someone else already said on Medium 100 times? The point is not to make it sound like it doesn't have value, just that it often doesn't feel that way.
Cory Watson: You don't often see the impact until much later, when you realize that that customer made some internal cultural change. I was just discussing with someone last week how to help them make observability more fundamental to their day-to-day work. I asked them, "What carrots are you providing to your team?" It's easy to have sticks and say, "We're gonna whack you on the hand if you don't measure." Are your deployment processes connected to your observability data, so that you can say, "If you do it right, you get these cool features as a side effect"? How much of that is really happening in your org? That's a conversation I've had many times. Sometimes, for a customer, that's brand new. Even if it is new, the other person you're speaking with often brings a whole new perspective, some really exciting new ways of thinking about the problem. I don't know. Every customer conversation is special and awesome in its own way. In some cases, we find out later they had a big impact. In other cases, they buy something else. But that's okay too, because we've all gotten better as part of the process.
Mike Julian: Something you said there reminds me of one of the ways a vendor really helps: there are often conversations happening internally, and when someone external comes in and says the exact same thing, it lends more credibility to it, because now it's not just internal. Now it's a third party with no vested interest saying the same thing. New initiatives that internal people want to do will often find traction as a result of a vendor coming in and saying it.
Cory Watson: So much so that I had active conversations with vendors when I was a customer. Clearly, at some of the companies I've worked for, the name of the company meant something. And I'm something of a personality sometimes. With the combination of that weight, it was not uncommon for vendors to basically call me and want me to break a tie. Not me as in Cory personally, but me in the job I'm doing, et cetera, et cetera. It really does help, because it's very easy to lose that perspective. The problem with eating your own dog food, to use the industry metaphor for using your own stuff, is that eventually it all tastes the same. Sometimes you can't tell. Do I need some cumin on this or not? Do I need more salt? I don't know, it's dog food to me. It's funny, because that's also my role internally. I work in the office of the CTO, which means I don't participate directly in day-to-day engineering. I do some R&D function. I do a lot of customer feedback stuff, because I talk to a lot of customers. I am also a neutral party when it comes to this stuff. I don't work on the back end, so I'm not defensive about it. If you're listening to this and there's something you don't like about SignalFx, hey, I probably don't like that either. I'm helping them understand what we could do to improve it and helping to break those ties. Also, to shape where we're going and what we're doing. In addition to the customer side, to the feedback side, to the sales side, there's also just the rank-and-file every day: what more cool stuff can we be doing?
Mike Julian: Yep, absolutely. On the topic of you being a character and you talking to customers, I imagine there's a lot of stuff that's ... Let me rephrase that. What do you see the future of observability holding? What are you talking about with your customers? What are you thinking about? What are you working on?
Cory Watson: I think there are two pieces to this. When I started at SignalFx, I tweeted out one day like, "Hey, I've started doing work now. If you've got questions, thoughts, ideas, let me know." John Allspaw, who many listeners may be familiar with (if you're not, he used to be the CTO at Etsy and now runs a company called Adaptive Capacity Labs), talks a lot, especially lately, about incident stuff. How do people function in systems when there is failure? He tweeted back to me, which first off was like, "Holy crap, that's so cool. I've loved his work. How do you even know who I am?" He said, "Tell me what your tool does that helps us." This wasn't just aimed at me or at SignalFx. It was aimed at the industry.
Cory Watson: A lot of what I've been discussing internally is, "Okay, you've instrumented your stuff. You've got these artifacts: charts, dashboards, tracing, all these mechanisms for looking into the problem. But what are we doing to help you with that?" What are we doing to provide you with that information? I don't wanna overfit for the problem of the person who got paged, because these platforms often do a lot of other stuff as well, right? But what are we doing for the person that got paged? It's a good line of inquiry too. As an industry, I feel like with all the tools that we operate, we've lost the person on call as the person giving the feedback about the tool. I've seen a lot of improvement in all the vendor tools that I've used. In fact, there's even a rich third-party ecosystem of tools that basically intercept your alerts to help provide context or deduplication. Is that deduplication? I usually hate it when you say a word so many times you don't even know if it's real. I think that's a word. There's so much more we could do in that space. What about just the simple usability of it? I remember I did a short stint as a product manager in training at Twitter. One of the things that I learned from someone who was training me was that sometimes a feature can be implemented really cheaply and simply, just to prove its efficacy. I don't think I've seen a single monitoring tool that, when it notifies you, has a button that's like, "Hey, computer, this is not helpful. Please stop doing this." It doesn't have to actually take action, but it should record that sentiment, because that's really important.
Mike Julian: That's a fantastic idea.
Cory Watson: Well, you can implement it pretty cheaply. Just log the thing. Point it at a web server, log hits to that HTTP endpoint, and then just go back and scrape it together later. Then feed that information back to the folks who do your DevOps tooling, or maybe your manager. This is something I think about often: we provide very poor tools for engineering managers to look at the health of the on-call rotations that they're responsible for and to help guide people. It's very easy when you're on call to get caught up in, "These things just go off all the time, and I can't make it any better." We rarely leave time. One of the things I used to push on my team at Stripe, that I think we got pretty good at, is: if you're on call, part of your responsibility is, assuming that you have time, to leave things in a better place than they were when you got there.
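The "just log the thing" idea Cory describes really is a few lines of code. The sketch below is purely illustrative (the class and alert names are made up, and a real deployment would put `record` behind an HTTP endpoint rather than in memory), but it shows how cheaply the sentiment can be captured and scraped together later:

```python
import time
from collections import Counter

class AlertFeedbackLog:
    """Cheap 'this alert was not helpful' recorder: just log it, aggregate later."""

    def __init__(self):
        self._events = []

    def record(self, alert_name, helpful, user):
        # In production this would be one log line behind an HTTP endpoint;
        # here we keep events in memory for illustration.
        self._events.append({
            "ts": time.time(),
            "alert": alert_name,
            "helpful": helpful,
            "user": user,
        })

    def unhelpful_counts(self):
        """Scrape the log: which alerts get flagged 'not helpful' most often?"""
        return Counter(e["alert"] for e in self._events if not e["helpful"])

log = AlertFeedbackLog()
log.record("disk_usage_warn", helpful=False, user="alice")
log.record("disk_usage_warn", helpful=False, user="bob")
log.record("p99_latency_high", helpful=True, user="alice")
print(log.unhelpful_counts().most_common(1))  # → [('disk_usage_warn', 2)]
```

The aggregated counts are exactly the kind of thing an engineering manager could review when deciding where a rotation should spend its tuning time.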
Cory Watson: If you're seeing alerts that are bothering you, allocate time as part of your on-call rotation to go and re-tune those. If it's a static threshold that you need to change, do that. If the runbook's slightly out of date, make sure you schedule time for that or make tickets for other people. You don't necessarily have to take on all that work yourself, but record it, because engineering managers are never gonna be able to help you allocate time for it if it's an undefined quantity. That's something that I'm thinking a lot about. Today, specifically, I've been digging in a lot on accessibility of the tools that we operate. When people say accessibility, we often think of people who have some either permanent or temporary disability. Maybe they have an injury and lost the use of an arm, or their arm's in a sling or something, or maybe they're blind, but we also [crosstalk 00:26:40]
Mike Julian: There's a lot of those so-called invisible disabilities.
Cory Watson: Well, 8% of men of Northern European descent have red-green color blindness. What is the thing we all use in all of our charts to denote goodness and badness? It's red and green. These are the colors that those people are most likely to not be able to see. You've lost that entire channel of communication with those people. Today, for something I'm working on about dashboard design, I'm looking into accessibility. How about screen readers? What are we doing? Think about our charting elements. Are screen readers capable of deciphering those? There's also a lot of stuff that's not even that technically complex. What are the titles of your charts? I don't have it handy, but something I ran into earlier was some research (I don't know if it was a formal study) where someone found that basically no one knows what's in a chart except what the title says. It's often been one of my favorite parts of observability tools: someone goes, "Well, what's in that chart?" Well, it's so-and-so number of seconds that this happened. They're like, "Yeah, but where?" You just have to go look in the code and find that measurement and go, "Okay, that's what that means."
Mike Julian: Yeah, I hate that.
Cory Watson: Well, I think that we do ourselves a disservice by not gratuitously labeling.
Mike Julian: Right, I agree.
Cory Watson: X-axis labels, Y-axis labels, the titles of the charts, the units that are placed there. Taking the time to basically wean yourself off of using jargon and just the assumption that the other person knows what these things mean.
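One cheap, concrete fix for the red/green problem Cory raised a moment ago is to stop using raw red and green entirely and pick from a colorblind-safe palette. The hex values below are from the widely used Okabe-Ito palette; the helper function is just an illustrative sketch, not any particular tool's API:

```python
# Okabe-Ito colorblind-safe palette (hex RGB values from the published palette).
OKABE_ITO = {
    "orange":         "#E69F00",
    "sky_blue":       "#56B4E9",
    "bluish_green":   "#009E73",
    "yellow":         "#F0E442",
    "blue":           "#0072B2",
    "vermillion":     "#D55E00",
    "reddish_purple": "#CC79A7",
    "black":          "#000000",
}

def status_color(ok: bool) -> str:
    # Blue vs. vermillion instead of green vs. red: these remain
    # distinguishable under the common red-green forms of color blindness.
    return OKABE_ITO["blue"] if ok else OKABE_ITO["vermillion"]

print(status_color(True))   # → #0072B2
print(status_color(False))  # → #D55E00
```

Pairing the color with a second channel (an icon, a label, a line style) keeps the information available even when color is lost entirely.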
Mike Julian: One of the things that has completely changed my views and perspective, and really leveled up my own skills on visualization, was reading Stephen Few's book Information Dashboard Design. It's this massive, full-color, really amazing print-quality book, originally published with O'Reilly. He's got a second edition out. It's also fantastic. He's written tons and tons of other books, too. The entire thing is: this is how you should be building dashboards, this is what visualizations should look like, with a whole bunch of examples of visualizations done poorly and explanations of why they're not working. One of my favorite things is he lays out really clear reasons why a pie chart is the worst chart ever created.
Cory Watson: Oh, yeah. That sounds really interesting. I just put it on my list. Some of the work he probably researched himself is what I've been reading lately. A lot of old theory. [crosstalk 00:29:26]
Mike Julian: He was a student of Tufte.
Cory Watson: That pie chart example, this is one people love to harp on — but like, our ability to understand the amount of area contained in a pie wedge, it's pretty terrible. It works for large differences; but for small differences, less so.
Mike Julian: One of the visualization types that I really wish more monitoring tools used — Few talks about this as well — is tables are actually a very valid visualization. But we don't think about a table as a visualization. But, in a lot of cases, that's the most effective and easiest way to understand that information.
Cory Watson: In that same perspective, we also underutilize I think, bar charts. If you look at our ability to process quantitative information like the length of a bar, that's why bar charts are often cited as being superior to a pie chart because our ability to understand length is so much better.
Mike Julian: Basically, anything that you would reach for a pie chart for, it should probably be a bar chart.
Cory Watson: Yeah, but then that lends itself also to the tabular form, because a table puts all the information on those X-Y axes; it's basically a chart without graphics, just a chart with numbers in its place. One of the things that's even suggested for accessibility purposes is to put the label for the data as close to the visualization as possible. If you've got a line chart, put it at the tip, for lack of a better word, the right-hand side of the line. My favorite thing I've learned about that, and then I'll switch topics, was about what we call run charts. The generic form of a chart that shows time on the bottom, on the X-axis, is a run chart. It's a special form of a line chart. My favorite thing that I read about it, going back to my Tufte books, which I had tucked away and hadn't looked at in forever but pulled back out for this research, was that we rely on these charts, yet time changing is very rarely the causal part of what's happened. We have this entire visualization technique built around showing things change over time. Yes, time is changing. That's very important to us; but rarely do we put as much thought into providing the context inside our organization or in our systems that is actually driving that change. How many of those hints are we providing to people? Which is the teaser for what I'm gonna try to talk about a lot: how do we do a better job of instrumenting the things that are currently not instrumented and actually being able to ... I don't think "causalate" is a word, but we correlate things often. Correlation is not causality, as we all know. Causalating things is much more difficult.
Mike Julian: You and I have talked about this previously, totally not on the show: there's plenty of stuff out there that seems hard or even impossible to measure, but it's actually the stuff we care about the most. One of my favorite examples is measuring customer happiness. A service-level indicator has to map to customer happiness; otherwise, how do you know it's valid? Well, now I have a new problem: how do I measure customer happiness?
Cory Watson: We get lazy and we just don't measure that stuff. We say, "Oh, you just can't measure that."
Mike Julian: But that's not true.
Cory Watson: Yeah, it turns out there's actually some techniques for that stuff.
Mike Julian: There's a lot of really interesting techniques. Unfortunately, we could talk for hours about that topic alone. Perhaps, I'll have to have you back again sometime soon and we can dig into that one.
Cory Watson: Yeah, the only other thing I'm poking at in this realm is control theory, which is where classical observability has its roots. Not necessarily what we're talking about in computers. It's something I've doubled down on recently. It's pretty common when you're demonstrating, or when vendors show you this stuff: "Here's a chart, here's it behaving badly, and here's some effect that we're having on the system." If you move forward into a more automated world, most modern, large-scale industrial things are automated to the point that things like control theory are what govern them. I'm very interested in why we still rely so heavily on intuition and people to do a lot of the operational work on the Ops side of the DevOps equation. How much of that could be improved if our systems were more, drumroll, controllable?
Cory Watson: Observability is how measurable something is: whether the state of its internals can be inferred from its outputs. What we don't have are systems that are easily controllable. How many of our systems have direct APIs for influencing the knobs and levers that govern their operation? That's something I'm really interested in, because as we increase the surface area of our applications that can be controlled, if you imagine them as a multi-dimensional space of good and bad configurations, how much of that could we automate? There are people out there doing real math and science on keeping systems safe. Little's Law is a very commonly cited one for queuing; the Universal Scalability Law and a lot of other performance-minded ideas can actually be applied to the things we're doing, but our systems very rarely allow us to manipulate them that way. Instead, someone's gotta go edit the YAML file, save it, upload it, stop it, start it, redeploy it, blue-green it. How much more could we be doing if these things were a little more automated? I think that addresses the dirty secret observability has: it tells you something's wrong, but not necessarily what it is. That's probably the third thing I'm poking at a lot these days.
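Little's Law, which Cory cites, says the average number of requests in a system equals the arrival rate times the average time each request spends there (L = λW). Here's a hedged sketch, with made-up numbers, that simulates a single-server queue with Poisson arrivals and checks the law against what it measures:

```python
import random

random.seed(42)

# Little's Law: L = lambda * W.
# All numbers here are hypothetical, for illustration only.
arrival_rate = 50.0   # lambda: requests per second (Poisson arrivals)
service_time = 0.01   # seconds per request (deterministic server)

t = 0.0               # current arrival time
server_free_at = 0.0  # when the single server next becomes idle
total_time = 0.0      # sum of each request's time in the system
n = 100_000

for _ in range(n):
    t += random.expovariate(arrival_rate)   # next arrival
    start = max(t, server_free_at)          # queue if the server is busy
    server_free_at = start + service_time
    total_time += server_free_at - t        # waiting + service

W = total_time / n       # measured average time in system
L = arrival_rate * W     # Little's Law: implied average concurrency
print(f"W = {W:.4f}s, so about {L:.2f} requests in flight on average")
```

At 50% utilization this stays well under one request in flight; push `arrival_rate` toward 100/s (saturation for a 10 ms service time) and W, and therefore L, blows up, which is exactly the kind of relationship an automated controller could act on.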
Mike Julian: Oh, that's fantastic stuff. I'm hoping that you're going to be writing a bunch of blog posts and giving talks about all this.
Cory Watson: Yeah, that's definitely in my ... I don't have a contract, but it's in my proverbial contract that I'm supposed to be writing about a lot of this stuff. In between talking with customers and assembling some of this, I think over the next year or so you'll see a lot more of that come out. I am definitely currently working on how to make better dashboards.
Mike Julian: Wonderful.
Cory Watson: As inspired by a lot of the stuff we're talking about here, I'm gonna have to totally check out the book that you mentioned earlier.
Mike Julian: Awesome. Well, where can people find out more about you and your work?
Cory Watson: Well, you can find out more about me on my personal website, onemogin.com, O-N-E-M-O-G-I-N. That's the Southernism for "do it one more time," if you haven't heard it before. You can also find me on Twitter @gphat, G-P-H-A-T. I do a pretty good job of babbling in both of those places. Usually, though, Twitter's probably the right place, if you can deal with all my retweeted hilarious, weird Twitter jokes.
Mike Julian: Yes. All right. Well, thank you so much for joining us on the show.
Cory Watson: No problem. Thanks for having me.
Mike Julian: To all the listeners, thank you for listening to the Real World DevOps podcast. If you wanna stay up-to-date on the latest episodes, you can find us at realworlddevops.com and on iTunes, Google Play, or wherever it is you get your podcasts. I'll see you in the next episode.