Jessy Irwin is the Founder at Amulet. Prior to this role, she ran her own consultancy, Jessysaurusrex LLC, for seven years, worked as a vice president of privacy and security at a privately owned public affairs firm, and was a security empress advocating for password managers at AgileBits, Inc.
Join Corey and Jessy as they discuss the best job title in the world, how majoring in art history was the best life decision Jessy made, why security needs to be as mundane as vacuuming the house, what Jessy is doing to make security more enjoyable, the role consumer branding plays in the adoption of security tools and practices, why Jessy thinks security problems are akin to lifestyle choices, why security practitioners should be focused on raising the cost of an attack, one of Jessy’s endless frustrations about working in blockchain, why Jessy generally avoids using the b word, and more.
Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Jessy Irwin, who today—doesn't matter at all what she does today because she used to have the best job title in the universe: Security Empress at 1Password. I just want to let that sink in for a minute. Jessy, welcome to the show.
Jessy: Hello. Thank you so much for having me.
Corey: So, it's got to be challenging to know that when it comes to job titles, you have peaked, not just as a person but for the entire industry a couple years back, and it's all sort of downhill from there. But what are you up to these days?
Jessy: Yeah, I'm sad that I'm no longer the Empress in my previous place, but I actually have decided that I wanted to set up my own organization to work on security problems. So, these days, I'm not sure if I'm technically Supreme Ruler of the Amulet Universe, but I'm working on my own project where hopefully I can help make security stick better for people. That's my catchphrase: “Help make security stick better.”
Corey: I like that. You've become famous in small circles, which is, I guess, probably the best way to describe people who are big deals on Twitter. But what's always been interesting to me about your approach to security has been the human-centric piece of it, where it's not about trying to talk about the far-future advanced persistent threats, although you certainly can do that, but more along the lines of how you effectively raise the security bar for day-to-day folks. What got you to focus on that?
Jessy: So, this is the part where I get to tell you that majoring in art history in college was basically the best life decision I've ever made. And I'll tell you why. Art history is interesting because you have to study objects and images, you have to be able to do analysis, especially technical analysis. But when you step back, you are looking at objects that represent societies, and cultures, and lives. And what I remember most about my art history classes and my time as an archaeologist in college is really that people have been engaging in security behaviors pretty much ever since human settlements started.
We've had to protect ourselves from each other, and from external threats in so many different ways, and risk management is something we've done long before computers ever happened. Unfortunately, computers make everything easier to do, especially remotely. So, the same problems we had a very long time ago—keeping our coin hoards safe, for example—we still have those, and it's easier than ever for somebody online who wants to separate you from your identity, your data, or something that is valuable or important to you, to do that. And I just really think that a lot of times the focus is too heavy on the technical side.
If we're talking about PGP and ZRTP, and we're throwing the alphabet soup together, we're really forgetting the part where somebody just wants to pay their online power bill, or somebody wants to log into their bank account, and know that they're not giving another person all of their money, or all of their personal information in a way that will harm them. And I think that's way more important, and really the core of what we should be doing, instead of engineering these perfect invisible systems that nobody understands, and everyone has to become an engineer to use.
Corey: And that's always been, sort of, the weak spot of security. It's not the advanced super deep-dive breaking into things. It's the fact that someone isn't trained and falls for a spear phishing attempt, and emails the company payroll to someone. It's the human side of people entering their credentials into the wrong website, and it always seems like it's never the big stuff. In the world of cloud, we see this all the time, where you have the S3 bucket negligence story of people failing to secure their S3 buckets, and instead exposing company database backups, people's social security numbers, etcetera.
Then you also do see the more advanced attacks like the one that Capital One was subject to, where there were effectively four or five different misconfigurations that were then chained together in order to result in something kind of neat. But to the outside world, those two things look the same, but they're very much not. It comes down to fixing usability. I've spent an awful lot of time trying and failing to find a legal way to patch humans, and I've never been able to do that. Is this problem ever fixable? Is this something that we're going to continue to see iteration on, on the human side, without getting anywhere? Or is there light at the end of that tunnel somewhere?
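For readers who want to see what guarding against that kind of S3 bucket negligence looks like in practice, here is a minimal sketch using boto3 that checks whether a bucket's public access block is fully enabled. The bucket name is a made-up placeholder, and the sketch assumes AWS credentials are already configured; it is an illustration, not anything prescribed in the episode.

```python
# Minimal sketch: check whether an S3 bucket blocks public access.
# The bucket name is a placeholder; this assumes credentials are already
# configured (environment variables, ~/.aws/credentials, or an instance role).
import boto3
from botocore.exceptions import ClientError


def bucket_blocks_public_access(bucket_name: str) -> bool:
    s3 = boto3.client("s3")
    try:
        response = s3.get_public_access_block(Bucket=bucket_name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # No public access block configured at the bucket level at all.
            return False
        raise
    config = response["PublicAccessBlockConfiguration"]
    # All four settings need to be on for the bucket to be fully locked down.
    return all(config.get(key, False) for key in (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    ))


if __name__ == "__main__":
    print(bucket_blocks_public_access("example-company-backups"))
```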
Jessy: I'm a little optimistic about this. I hope that the realization that we can only create so many protocols and so many new whiz-bang code things, and that code is not the answer, is now really starting to hit people in the face, or in the feels, or wherever they need to be hit to change their point of view. But ultimately, we have two problems to solve. All security is actually behavioral economics and policy that you have to stick together and align towards a specific outcome. And I think right now, every company is essentially its own little nation-state with its own little national security stance, whether they've got a thousand security engineers keeping one of those many-numbered threats out of everyone's email in the morning, or whether they're a small business down the street.
And it's our job to make security into something that is part of your launch checklist, or your productivity tool, or so normal and so mundane that it's like the cyber equivalent of vacuuming the house. A lot of people refer to what we should be doing as setting up cyber-hygiene programs. That's cool, but we also need to make sure we are thinking about what the people abiding by those programs, or following them, would actually need to do. You're going to get, realistically, 30 seconds of attention from someone. Even on YouTube, someone bounces from a video after 12 seconds if it looks sort of boring, so when you think about this problem overall, and this war for attention that we've created with technology, plus all of the new products that come out and all of these sneaky side menus and configurations you have to know, there's always something more to do. And there's always another way to spend more hours of your life than you should, trying to secure something. It would be nice to just have 10 commandments that we focus on. And for those of us who are in a position to build products, and to work with product teams or product managers, to just take the core security stuff, put it at the top of the list, and get it done with as early as we can so that we're not all having to freak out and become firefighters and incident responders, with or without tons of resources.
Corey: The challenging part that I've found across the board with infosec as a whole, and part of the reason that I've always found you to be such a refreshing voice in the space, is that by all perceptions, from everything you say online, you have an incredibly rare skill in the infosec space, by which I mean, you are not a massive jerk to everyone. There's definitely an asshole problem in the world of infosec. And it's something that you have never exhibited that I've ever seen. How is it, first, so difficult to find people who aren't being obnoxious in the world of security, and, second, how have you avoided it?
Jessy: I think that everyone has an opportunity and a choice about whether they want to be an asshole or not. I tried really hard not to be a giant one but, more than anything, the attitude that has been exhibited to me over the past 10 years I've been playing in security, and the past seven years where I've had direct jobs in security, is that there's a lot of gatekeeping going on. I mean, I come from a background with lots of humanities, and creativity, and writers, and I love that, but ultimately, the world is a better place when we have more people thinking about these problems, not fewer. And the attitude that I've seen come from the community around security, and a lot of the industry around security, has been to use some of the stupidest things you could ever come up with as a way to intimidate someone from taking a first step into learning more or getting interested, because if more people who aren't like you join the industry, the people who've been around the longest, or the people who feel like they get power from their roles, lose that.
And I get it. That's scary. But this is a specific problem where we need to be making friends. Like, we should be in a land grab to make everybody think that two-factor is the coolest thing on the planet, and we've got to be creative about it. Instead of two factors of authentication, maybe you need two raptors running after an attacker who tried to log into your account if they can't authenticate correctly. That's way more fun to think about. But everything is so serious and end-of-the-world all the time and, I don't know, that just doesn't seem like a group of people I want to hang out with, and it certainly doesn't seem like the way that we recruit and onboard the entire planet into making better passwords and changing their behavior online.
Corey: One of the most transformative things I've ever done for my own personal security was getting a password manager that I could just shove everything into, and then eventually spending a very long few days at previous jobs—when I didn't want to do my actual job—of going through and rotating everything to a unique password. The benefit there is I don't have to remember anything except that initial password, and when one place gets compromised, I now don't have to worry about changing that password everywhere in creation across 400 different websites. What's surprised me is how easy it's been to get other folks who do not spend their lives working in the world of high technology on board with that. Things like Apple supporting password managers natively in iOS, where it just auto-fills from a lot of those apps that are out there, has been transformative for an awful lot of people. It definitely demonstrates your point of the tide rising. What's next along that axis? I mean, you can always talk about how to get folks who are deeply technical sorted out, but how do you fix this for, I guess, the people who have, you know, real jobs?
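As a concrete aside, the unique-password-per-site workflow Corey describes boils down mechanically to something like the sketch below, with the password manager remembering the results so you only keep the one master password in your head. The length, character set, and site names here are illustrative choices, not recommendations from the episode.

```python
import secrets
import string

# Illustrative sketch: generate a long, random, unique password per site,
# the way a password manager does under the hood. The sites, length, and
# character set are made-up examples.
ALPHABET = string.ascii_letters + string.digits + string.punctuation


def generate_password(length: int = 24) -> str:
    # secrets uses a cryptographically secure random source, unlike random.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


for site in ("bank.example", "email.example", "fashion-blog.example"):
    print(f"{site}: {generate_password()}")
```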
Jessy: Yeah. So, this is the hard part. Essentially, we have been very good as technologists at developing broad appeal among our own, but this is actually a consumer branding problem. We have to build a culture around the technology that we use and build, to make these things that take a lot of extra time to set up cool, and fun, and worth it. And what that really requires is for us to know our user, and to know our audience.
The conversation that I have with my 72-year-old dad versus the conversation I have with my 27-year-old little brother—I think he's 27, at least—those are two totally different conversations. And when I have to talk to them about why this is important, their incentives are hugely different. If you are talking to your family, and they can remember a time in your life when you were in diapers, they're probably not going to listen to you very often, especially on technology advice. It's why Thanksgiving and all the other holidays can be tough when somebody pulls out a phone and wants tech support.
But we should be able to tell someone who doesn't want to be an engineer why they should do this. If it's for a mom, more often than not, it's a way to take care of your family. Grandma likes taking care of families, too, let me tell you. If you're talking to a student who's never had anything completely terrible happen to them in their life, but you're sending them off to college, and they need a plan for taking care of Social Security cards and identity documents—really important stuff—it's not about right now, but about investing in their future, and making sure that no one else can hurt them when they're on their way up in the world and finding their footing. We have to be able to talk to everyone about why all of this technical security stuff is worth it, even when it fails, even when it's a pain in the butt, even when you really just want to reuse that one stupid password because the password manager is not working and it won't generate or fill the right way because some mean developer made your fashion blog website completely unusable on mobile.
We have to be able to at least incentivize people to do more of the right thing, and maybe not even the right thing all of the time, so that we make continuous progress, and we see continuous improvement, rather than trying to get everyone to be perfect at everything at all times for exactly the reasons that we think it's important. Our threat models, as technologists, are completely different than the threat models of everybody else around us. But often, if you're on a security team and you've seen these advanced threats, maybe you do get a point of view that the password manager doesn't work because the attacker is going to hack into your operating system and steal the plaintext out of your memory, blah, blah, blah. But that's not a reality most people face, ever.
Corey: Especially in a world of Cloud. I mean, it seems to me that a lot of the best practices, like encrypt everything at rest, made an awful lot of sense for, say, your laptop, or for your data center, where there have been any number of stories of people breaking into improperly secured office data centers, or driving a truck through a wall, loading a rack into the back of it, and taking off. Good luck doing that with one of the hyperscale cloud providers. But it's hard to get those edge cases explained across the board. So, in many cases, just saying do it all as a best practice seems to be the path forward. It's giving people fewer decision points. If you give people three things to do, they'll generally do them. Give them 100, they'll do none of them.
Jessy: Exactly. And something that I see a lot actually—so I think too much about this—but when you look at health advice, when you look at wellness advice, sometimes you will run into an article written by a doctor who knows to tell you five things because you're only going to do two, and realistically one of them will stick within a month of reading an article. Other times there's a checklist of 283 supplements and things you should be eating, and blah, blah, blah. And at the end of the day, what all of these security problems actually boil down to are lifestyle choices in the same way that some of our issues with healthcare are also lifestyle choices.
I can talk to a small business owner and ask them security intake questions, and just like any other survey, they're going to tell me all of the nice answers. When I actually get in to do the work, I'm going to see where they've done the technical equivalent of having cake for breakfast, and fries for lunch and dinner every day. And it's okay. I mean, that's reality. But unfortunately, I think a lot of us are happy to portray an ideal lifestyle, and we don't actually talk about the lifestyles that we actually live on the security front, and again, it sets this impossible standard, and it makes it really, really hard when you essentially have people writing fanfiction about the 283 steps they take to secure their home and family, when in reality, it's probably, like, 15 or 20. And maybe one or two of them you're a little lax on.
Corey: And that seems to be the, I guess, the message that gets lost, where unless you're doing all of these different things, you're going to be in incredible danger. There was a talk I saw once—I wish I could recall who gave it—where there are two threat models to be concerned about: Mossad and not-Mossad. If it's the Mossad after you, you will die. I think it was James Mickens that may have made this observation, but please correct me if I'm wrong. If it's not the Mossad, well, there are things you can do.
It's about raising the bar for what it takes to compromise someone. At some point, if people invest enough resources, they're going to wind up breaking into anything you've got. The question is, what is that bar? If your password is the same thing everywhere, and it's just the word kitty—sometimes with an exclamation point at the end—then maybe you should try and raise what it takes a bit further. But past a certain point, it winds up mattering less and less. An easy example of that would be—I'll ask you this: is there a material difference between whether I have 40 characters in my password, or 50?
Jessy: I would say yes, but half of that difference depends on what is going on on the back end of a web service you might be using, or server infrastructure, and how that's been designed. So, maybe you don't actually know as a user, and it never comes up to you. On the other end, maybe it's no, because your extra 10 characters are all zeros, or they're the same word twice. That's really easily discoverable. There's a quality piece there that is really difficult to judge. There is a numerical difference between the password strength of 40 characters versus 50 characters, but on the other hand, if you're reusing the same word five times to get to 50 characters, maybe it doesn't matter. Maybe there's really nothing there, and it's going to get broken easily anyway.
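To put rough numbers behind that answer, here is a back-of-the-envelope sketch. The character-set size and the 100,000-word dictionary are illustrative assumptions rather than a real strength estimator (tools like zxcvbn model this properly), but they show why ten truly random extra characters matter while ten characters of repeated filler mostly do not.

```python
import math

# Back-of-the-envelope entropy estimates. These are illustrative assumptions,
# not a real password-strength estimator; real tools also look for dictionary
# words, repeats, and keyboard patterns.


def random_entropy_bits(length: int, alphabet_size: int = 95) -> float:
    # A truly random password drawn from ~95 printable ASCII characters.
    return length * math.log2(alphabet_size)


print(f"random 40 chars: ~{random_entropy_bits(40):.0f} bits")  # ~263 bits
print(f"random 50 chars: ~{random_entropy_bits(50):.0f} bits")  # ~328 bits

# The "padding" case: take a strong 40-character password and append the same
# dictionary word twice to reach 50 characters. If the attacker models the
# padding as one of roughly 100,000 common words repeated, the extra ten
# characters only add about log2(100,000) ~= 17 bits, not ~65.
extra_bits_from_padding = math.log2(100_000)
print(f"extra bits from a repeated dictionary word: ~{extra_bits_from_padding:.0f} bits")
```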
Corey: This episode is sponsored in part by ChaosSearch. Now their name isn’t in all caps, so they’re definitely worth talking to. What is ChaosSearch? A scalable log analysis service that lets you add new workloads in minutes, not days or weeks. Click. Boom. Done. ChaosSearch is for you if you’re trying to get a handle on processing multiple terabytes, or more, of log and event data per day, at a disruptive price. One more thing, for those of you that have been down this path of disappointment before, ChaosSearch is a fully managed solution that isn’t playing marketing games when they say “fully managed.” The data lives within your S3 buckets, and that’s really all you have to care about. No managing of servers, but also no data movement. Check them out at chaossearch.io and tell them Corey sent you. Watch for the wince when you say my name. That’s chaossearch.io.
Corey: An argument that I heard once, when botnets were on the rise, was that at some point you have to begin assuming from a security posture that whoever it is that is attempting to break in will, more or less, have infinite computing resources to throw at this. So, the answer starts instead becoming things like two-factor auth, or, as you correctly pronounce it, two-raptor auth. Tell me more about that.
Jessy: Yeah, so the main goal of security, it's not to keep everyone out all of the time. If that's our goal, we're going to fail at it, and we should never take any of these jobs or even bother, quite frankly. But the main thing that's the most important to do from a security perspective—whether you're a farmer 50,000 years ago, or you're the guy holding the keys to the Vatican art galleries—I promise I'm going somewhere with this—it's important to raise the cost of an attack. What's really interesting, especially online, is we figured out that passwords are the weakest link. They are a huge privacy problem. They're a huge security problem. So, essentially, we need something else.
We all came together and decided that we would make sure every computer on the planet had two velociraptors that were trapped inside, and in the case of an attacker coming to try to steal your passwords, they'd be unleashed and they'd go eat his face off. Or at least that's how I like to explain it. In reality, we needed something else. We needed another layer of defense, and the best thing we came up with was, I guess, a rotating 6 to 10 digit code that lasts for anywhere from 30 seconds to 5 minutes. It's very hard for an attacker to steal from you or to take away from you, especially if you're using a physical security key that produces those numbers automatically.
I kind of joke that with two-factor authentication, instead of just one password, now we have two because it is basically a one-time-use password in most cases. But we're always going to find that we need to add an extra layer or a something else. It's just a question of how much of something else will our users be willing to sustain, and to do? And when should we remove the burden of doing a something else from them and integrate it into what we're already building?
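For the curious, those rotating codes are typically TOTP one-time passwords, and the core mechanism fits in a few lines of standard-library Python. The sketch below is a minimal RFC 6238-style illustration; the secret is a made-up example, and real deployments get theirs from a service's 2FA enrollment step rather than hard-coding one.

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP sketch (RFC 6238-style), standard library only. The secret
# below is a made-up example; real apps receive a base32 secret from the
# service's 2FA enrollment QR code.


def totp(base32_secret: str, step_seconds: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // step_seconds          # which 30-second window we're in
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    # Example secret only; don't reuse it anywhere real.
    print(totp("JBSWY3DPEHPK3PXP"))
```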
Corey: And that's part of the problem. It's not always about how to build something new, and exciting, and revolutionary, but how to retrofit that back to real-world problems that people have. Now, that brings us, of course, to blockchain. My argument for a while has been that it's a neat technical trick that has struggled to find anything approaching a business model for the past decade that doesn't revolve around speculation or scamming people. Where do you stand on this, given that you actually work with it in capacities that aren't just making fun of it on Twitter?
Jessy: Honestly, I agree with that assessment. One of my endless frustrations for the past two years of working in the blockchain space has been watching people pay more attention to coins, and their value, and their worth, versus some of the fun things that we're actually engineering with code. And the hype machine is, frankly, incredibly annoying. There are some really interesting things we get to play with in blockchain that I don't think anyone really realizes. We get to play with virtualization; we get to mess with encryption; we get to do all kinds of exciting encoding and decoding things.
I feel like in some aspects, there's actually been a huge contribution from blockchain-land to web application security, just based on the fact that we offer built-in bug bounty programs for any code holding coin. There's also—non-joking—playing with distributed networks; playing with the concept of decentralization, which really gives security properties like resilience and redundancy to networks, which is really exciting. But overall, the big disappointment is just watching people try to slap a blockchain on any problem that exists, rather than stepping back and thinking about some of the properties that you can get from what exists, or how you can tweak what exists to get something new, and refreshing, and exciting that we haven't really seen before.
Corey: And what's interesting is that the idea of solving new and exciting problems that, until now, we did not have the capability to solve is incredibly promising. And I love the idea. The problem is that so far, most of those examples revolve around, well, there's no central authority that we can trust. Well, we tend to live in societies where, for better or worse, there are parties out there that, even if we don't agree with them personally, are sovereign entities who are empowered to enforce what they want to do, so you, sort of, have to trust them. There are stories for removing middlemen from transactions—or middlepeople from transactions—that tend to be compelling on one level, but in practice, there are a lot of entrenched interests that are going to fight explicitly against that. I think that rather than looking to necessarily supplant existing structures with a complicated buy-in story, finding new and exciting things that this empowers is probably the right answer, but I see less of that than I would like.
Jessy: I totally agree. Personally, I might not be salty enough to work in security when I say this, but eventually you have to trust something, somewhere. There's going to be a Root of Trust in anything you build, whether it's cryptographic, whether it's reputation-based, it's just going to happen. You make the decision to trust, or to buy in, based on a something somewhere. These things don't develop in a vacuum.
So, watching people basically deny that trust is required, and then go build a trustless universe, has been this Kafkaesque adventure that I've been on. But more than anything—okay, the first exciting application everyone came up with for blockchain that was sort of meaningful was supply chain. We see all of these case studies from IBM, and we see case studies from Walmart, and major retailers, and major regional grocery chains, who are using blockchains so that when you scan a QR code on a product, you can see where your lettuce was grown, and all the facilities that it went through, and how it made its way to your table. That's really exciting.
Do I know if it requires a blockchain versus a transparency tree? I'm not totally sure, but okay, fine. Keep going with that. I'm totally down with more technologies that can empower transparency, especially in an end-to-end situation for food, or medication, or agriculture, where the choices we make impact the future of our planet and the health of our bodies. That's fine, but what I think is really being missed in all of this blockchain cryptocurrency hype is the opportunities we have in some other places.
Microsoft has done some incredible work on distributed identity, and decentralized identity. There are so many opportunities for tamper evidence, and resilience, and even sharding identities, that blockchain technology can give people to play with. And especially given how hard identity problems are to solve in security and in computing, it would be nice to see more people besides just Microsoft get laser-focused on where the opportunities are and what we could build out of that playground. On the other hand, one of my favorite applications of blockchain has been watching all of these different mesh networking technologies essentially plug themselves into blockchains, and enable an entirely new decentralized infrastructure for the internet.
From a security perspective, watching major protocols get hijacked, and watching all of these cloud providers have massive DoS attempts just get thrown at them all of the time, I would love to see more resiliency and more decentralization of our networks, so that they can be tamper evident and more fault tolerant. But I don't see enough people getting excited about where else we could go with this that has nothing to do with money and more to do with resilience, and making sure that we're not just all making these internet companies and these services that are too big to fail, where when they get attacked the right way, they fall over and that's it. I mean, everyone freaks out when Twitter goes down, or Slack goes down. Or Zoom, by the way. It's like the end of the world. But how cool would it be to say maybe this internet we've been building since the 70s needs a rethink? Maybe we should play with mesh networking more. Maybe we should think about how we are connecting the rest of the world. And maybe that model doesn't need to look like what we've been doing previously.
Corey: That raises a fascinating question: what do those new models look like? There's a sense that we should build out these new formulas, these new structures, these new approaches. It's just hard to find people who are genuinely doing it. Blockchain has fallen into almost the category of punchline, in the same way that AI and machine learning have, to the point where, when I see someone talking about a blockchain-derived, machine-learning-powered, serverless organization, I think, oh, you're trying to scam money from VCs.
Why didn't you say so? I know the secret handshake, too. And it seems that very little that's transformative winds up being derived as a result. It winds up, from my perspective, tarring a lot of good-faith efforts with a somewhat ridiculous brush. I mean, one of my more obnoxious tweets on this was, if I had somehow come up with a terrific, transformative, legitimate usage for blockchain, I would go significantly out of my way to avoid referring to it as blockchain so that people would take it seriously. It's an ongoing problem in the space, to the point where it's almost impossible to have a serious conversation about it without some subset of the population rolling their eyes and tuning you out.
Jessy: Yeah. This is a problem I've dealt with for the past couple of years. When I want to talk about what I'm working on, I don't use the B word. It's a bad word. In the blockchain industry, and in the communities around all of these different coins and network protocols, people double down on that blockchain culture that we've heard all kinds of stories about, which is really difficult because it's hard to create broad appeal for people who do want to work on some of the engineering problems in this space that are super interesting.
On the other side, it would also be great for people to just suspend the jokes for five seconds, and think about when we've seen this behavior before. I remember, what, a decade ago when people started really getting into the Cloud, every security person on the planet was just like, “Oh, it's not the Cloud. It's somebody else's computer.”
Corey: Yeah, that is a tired trope at this point.
Jessy: It is. And I think about it this way: the point of security, and I think the point of technology, is supposed to be that we make cool, creative, amazing stuff happen, even if it's sort of wild, and a little nutty, and you have to suspend some disbelief like it's some movie. But on one side, I remember everyone criticizing the Cloud out of existence or so they thought. And I remember the race to the bottom for the jokes, and, “Oh, who's going to use that?” I also now look at the environment in the space, and I see security engineers pulling their hair out because instead of running to the front lines and trying to figure out how to get involved, and how to move things forward, and to get security in at the very core, they just made a bunch of jokes, and dug their heels in, and thought that saying no was going to be enough.
And from a security perspective, I think this is a huge industry problem, but also, you're not going to criticize something out of existence. On the Cloud side, look at the market cap of Google Cloud and Amazon. Look at all of the Cloud bills people pay. I think I even pay two different cloud providers right now—
Corey: That you know of.
Jessy: Yeah. Two that I know of, technically. But I feel like in the blockchain space—not the cryptocurrency part, but the blockchain part—there's billions of dollars hanging out over there. People are funding research, and trying to at least have a creative, experimental place where we're trying to figure out how to make things better, and play around a little bit. When that used to happen, when people did it in their garages in the 70s, it was totally cool. And now, it's kind of bad, and awful, and evil, and we shouldn't do it, and it's a huge joke, and I don't quite get it. There's a bit of a disconnect there.
Corey: I would agree. But I think this also gets to one last point that I want to talk to you about, which is, how do you, I guess, evolve the mandate of security from the way it currently is, this idea of being top-down, command-and-control everything, to being something that helps people get further, faster? I mean, how many people do you know who wind up effectively having a second computer to run the antivirus suite that their company mandates that they run, and they have something else that they do their actual work on, then copy it over? I mean, it completely defeats the point.
Jessy: So, something I think a lot about is how, in development environments and engineering teams and even among product teams, the Cloud coming after us all has essentially changed how security teams have to work with the rest of an organization. Theoretically, I think it's a major failing that we take all of the riskiest, hardest, most complicated things away from our developers day-to-day, and we shove them onto this team from the side that's usually understaffed, probably underbudgeted, and never going to be able to get ahead of an entire organization, and we make it their problem, and nobody else has to think about it. That is so wrong because what are we supposed to do? Have a 10-person security team in a 500-person company essentially split up and be in charge of enforcement for X number of employees across Y things? It's a recipe for burnout.
What we have to stop doing is looking at our jobs as control, and power, and doing the things that are not great. So many security teams that I've interviewed with have said, “Hey, actually, we're so powerful, we can stop a product release.” And I don't know if that sounds like power, or if that sounds like being a jerk, because frankly, if somebody worked for two years, or a year, or however long your product cycle takes, to ship something, and you had no involvement in it, or you had minimal involvement in it, but at the last minute, you can put your foot down and stop it, nobody's ever going to want to work with you again. What we have to get better at doing is building coalitions, making friends, aligning our incentives with one another.
And we also, a little bit, have to get to know the people around us. We can't just all hang out with our hacker buddies at conferences, and think that we're changing the world. We need to go hang out with marketing because they control reputation, and reputation is kind of a big deal. It's a huge asset for a company. We need to go hang out with operations; we need to go hang out with finance, and all of the critical functions of business, who maybe need IT support, or who are going to always be looking for shortcuts because they're understaffed as well, and we have to learn how to advocate for them.
We have to learn how to make things easier for them. And we have to not shame the crap out of them if they get something wrong. Security teams get it wrong all the time. That's why we have all of these issues with data leakage, and data breaches from AWS buckets. Yeah, they might get shamed by their colleagues on Twitter, but everybody else is probably too afraid to speak up, and to try to advocate for change with them. We have to be better ambassadors, and we have to be better at building teams and collaboration, or we are going to fail so miserably, that at some point, it's just going to be too expensive for anybody who's not a company with their own cyber military to actually go online and do business. And that's not okay. That's not what the internet was for.
Corey: What's the role of your CSO? Uh, mostly to sit in an office and play with a desk toy until the next data breach, and then they get ceremonially fired and replaced. That's not a viable outcome, even though that seems to be some companies' actual strategy.
Jessy: Most people that I know who have been in a CSO role, especially in the blockchain space, get asked for policy all the time. As if writing down a bunch of rules is going to be what protects you from an attacker who doesn't give a crap about any of your rules.
Corey: As my primary IDE is PowerPoint, that's usually not the right answer for a lot of these things.
Jessy: Exactly, exactly. And it's just really unfortunate that, again, we take these people who do security work, and we shove them all in one team instead of embedding them, or putting them in a position where they can educate, and advocate, and also build in technical reinforcements, and technical support, and monitoring, and metrics, and visibility. The answer isn't to shove everything into a black-box security team; the answer is actually to make the creation process for whatever you're building, and whatever your business is, more internally transparent, so that when an issue pops up, the humans who are good at identifying risk—security team or not—have a place to go and can voice that, because more often than not, someone closest to a business process, or closest to critical code, knows where the bugs are going to live anyway. And maybe they don't know all of our vocabulary words. Maybe they shouldn't have to go memorize an infosec dictionary to make a point, or to surface something. And that's really the mindset that I think more security people need to have. We shouldn't force everyone into our worldview. We should be able to have someone describe a situation the way that they might describe an ache or a pain to a doctor, and work from there to diagnose what's actually going on, not just throw a hissy fit and tell them to stop, I don't know, looking at cats with cheeseburgers on the internet, because that's where the malware comes from.
Corey: I think that it always comes down to meeting people where they are. And we see that in Cloud, we see it across the board with user behavior, and these problems aren't getting smaller. They're definitely getting larger. If people want to hear more about what you have to say on this and countless other topics, where can they find you?
Jessy: I will resume yelling on Twitter again soon about these topics. I've taken a bit of a hiatus because I've been in build-plans-and-conquer-part-of-the-world-again mode. But I blog at my website, jessysaurusrex.com, though I don't do it often. And every once in a while a cool person will invite me to their podcast, and I can rant for a while. Another place to look would be some of the keynotes or talks that I've given at previous conferences, especially if you don't travel to conferences much. That content I try to make evergreen and helpful.
Corey: Thank you. And I will throw links to that, of course, in the show notes. Jessy, thank you so much for taking the time to speak with me today. I appreciate it.
Jessy: Thank you so much. And don't forget to turn on your two-raptor authentication, if you don't have it on already. The dinosaurs will thank you. They are hungry.
Corey: Jessy Irwin, former Security Empress at 1Password. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts, and then be sure in the comments to leave your date of birth and mother's maiden name.
Announcer: This has been this week’s episode of Screaming in the Cloud. You can also find more Corey at ScreamingintheCloud.com, or wherever fine snark is sold.
This has been a HumblePod production. Stay humble.