Cybersecurity is complex. Its user experience doesn’t have to be. Heidi Trost interviews information security experts about how we can make it easier for people—and their organizations—to stay secure.
Human-Centered Security: The Book
Heidi: Hello everyone. Welcome to Human-Centered Security. I am Heidi Trost. I'm usually your host, but actually today John Robertson is with me, and he is going to interview me about my book, which I have right here to show everyone. Oh, so fancy. Um, yeah, so I'm going to be in the hot seat today and John is going to interview me.
So, thank you, John. Thanks for agreeing to do this roast. It is a roast, right? Of course.
John: Yeah, and I am getting paid in free books, right? Yes.
Heidi: Yes. Yes. Yes, of course you are.
John: Well, awesome. I'm glad to be here, uh, talking about your book. Um, you know, we did a book club for a while, and I feel like you kind of delayed the publishing so we didn't read it in that book club.
Is that right?
Heidi: That was the whole reason is I didn't want to discuss it in the book club. So I just kept on delaying it and delaying it. Finally, my publisher was like, look, you're going to have to do the book club one way or the other. So here we are.
John: So here we are to, to really dive deep into it. So let's just get to the question.
So can you just start with why did you write human centered security?
Heidi: Yeah. And you asked it the way that most people ask it, which is like, why would you be so interested in a topic so boring? Well, let me tell you, John. So I wrote Human-Centered Security because of something I found myself doing. John and I are both UX researchers; we observe people for a living, which kind of sounds weird when I say it out loud.
We're trained to observe patterns in human behavior. And one of the things that I noticed as I was doing user research, just as part of my job, was security behaviors. Often those weren't even part of the research that we were doing, but I was noticing how people were reacting to, um, different security scenarios, or maybe they would ask me questions once they realized, you know, I was tech savvy and knew a little bit about security.
And what annoyed me was that I didn't always have the answers to the questions they were asking, or I didn't really understand why certain things were happening. I just felt like I didn't even have a vocabulary to talk about it. And that really frustrated me. And the more I dug deeper and learned more, the more I realized, wow, security is truly a user experience issue.
What people do, what people don't do, what people understand, what people don't understand: all of those are user experience issues. And they impact users at every single point of the user journey, from the point where they're evaluating your product, to sign-up, to all of those, you know, security emails that they get from you when they can't log in again. It's happening all throughout that journey.
And once I realized that, I was hooked. I was, I was down the rabbit hole and there was no going back.
John: Yeah, it's, I feel like as a UX researcher who's done a lot of research in this area, you do get a lot of those awkward questions from participants.
Heidi: Oh, you get them too? I'm so glad I wasn't the only one.
John: Oh, yes.
Heidi: And it's, yeah. Like, how do you do this? And you're like, I don't know. It's baffled me, too.
John: Yes.
Great, easy, yeah. Um, I mean, I guess let's get into, you know, who should, who should be reading this book? Who are, who's going to be buying this?
Heidi: Yeah, so the book is for people building digital products, primarily.
So those are UX people, you know, UX designers, UX researchers, content designers. By the way, content designers, we really, really need you in security. But it's also for product managers, for engineers, for data scientists. And it's also for people who may not have product or UX in their titles.
So maybe those folks are working internally at their organizations and they're leading a behavior change initiative centered around security. So maybe it's security awareness. Maybe they're on the IT team and they're implementing different security controls. Maybe they're part of the compliance team.
So these folks are influencing the security user experience. They just may not have UX in their title. They may not have thought of themselves that way, but they are, they are influencing security behaviors, whether they like it or not.
John: Yeah, and I know a lot of the guests that have been on this podcast are cybersecurity people thinking in a very user-centric way as they deal with some of the issues they've found.
Heidi: Sure. And I should have added cybersecurity leaders, you know, people on security teams. Those folks also impact the security user experience, naturally.
John: Yeah, definitely. I mean, I guess we could go back here a little bit. So, you know, a designer might say, I'm just a designer. What does my role have to do with security?
What's your response to that?
Heidi: That's actually one of the number one questions I get asked, one way or the other. It's, you know, just like: what is my role? Like, this all sounds great, but, you know, I'm just a designer, I don't know anything about security. Well, my answer to that is that you are, whether you like it or not, designing for the security user experience, even if you've never thought about it that way.
So if you are talking about trust, if you're talking about privacy, if you're, you know, actually talking about security, even on like your marketing webpage, and you're explaining that to end users, that's part of the security user experience. If you're designing login pages, if you're encouraging or just not even talking about MFA, you're, you know, influencing security behaviors.
All of the communications, messages, warnings, pop ups, things that surface, bubble to the surface and impact users that have to do with privacy or security or trust, those all impact the user experience. And I just want to clarify one thing. So a lot of times your end users will not necessarily use the word security or privacy, but they might say things that center around trust and those are still security and privacy issues.
I want to trust you, that things will go as planned. And that's part of the security user experience. Um, so, you know, my goal with the book is to help product people, and designers in particular, really understand that you are in a unique position: you know more about security and privacy specific to your product than your users ever will.
So because you're in that situation, you are empowered to help prevent bad things from happening. You're in a position to anticipate and bolster your defenses against threat actors. And you owe it to your users, right? You're in this position where you know more, so you really have a responsibility to protect your users. They will never know as much about your product as you do, including the security and privacy aspects of it.
John: Yeah, I'm kind of curious, you know, for either people just starting out or, you know, even teams with just limited resources: what would you say is the best place to start? Like, what's a small thing that they can do now to improve the user experience?
Heidi: That's a great question.
I would say figure out where security bubbles to the surface. The way I describe it: think of a horizontal line, with all of these things happening below the surface, you know, all those bits and bytes and all the things that the user just never even thinks about, probably doesn't even know exist or knows are happening.
But there comes a point where security bubbles to the surface and does impact the user experience. So defining what those moments are, that's usually the biggest hurdle. A lot of times UX people don't even realize where these things are happening. So the first thing is just to be aware.
And again, as I said at the beginning, it's happening across the entire user journey. So if you're thinking about sign in or login, um, when the user is setting up or configuring a device, usually security is part of that, right? We've all been part of that. We're setting up a new device or a new computer and it's asking you all these different security settings and you're like, Oh, so, so much.
No, please go away. I'm just going to check 'yes,' you know, so I can proceed. Um, anytime the user receives some sort of communication, whether it's a warning or a pop-up, or it's an email related to security or privacy; you might get emails like when a new device logs into an account, that's an example. Um, whenever the user has to make a security or privacy related decision. And places where the user has to decide who or what to trust, and that is becoming increasingly important with AI, right?
Where it is, it's much easier to manipulate people. And yeah, those are all aspects of the user experience that, you know, it's going to be very specific to your product, but that's a starting point to say, okay, here's, here's where security impacts our users. I would say that's the very first step. And then the next step is to find the right people to talk to and ask them the right questions about the security user experience.
John: Yeah. I think you bring up a good point. I mean, I was just thinking as many people I know right now dealing with sort of scams through LinkedIn, for instance, where you wouldn't traditionally think of chat through LinkedIn as an issue you need to pay attention to as a UX person thinking about security, and it's clearly becoming a big issue.
So there's all of these, as you said, increasingly, um, problematic areas to deal with. And it's not going to present itself as cybersecurity necessarily, but it does impact user experience.
Heidi: It goes back to that word trust, right? Like who, who, who can you trust? What, what devices can you trust? What, um, services can you trust? What people can you trust? You know. all of those things.
John: So I want to switch gears a little bit. Um, you have a chapter called find the right people, ask the right questions. Um, like you, I place a lot of focus on cross disciplinary collaboration. So, um, I obviously think it's important, but I want to hear why you think it's so important.
Heidi: Yeah. Well, and I actually talked to John about cross disciplinary collaboration because he is the expert on this.
Um, but I'll give you my spin. So, probably one of the longest chapters in the book is called Find the Right People, Ask the Right Questions. And the reason I say that is because you cannot, and you should not, try to solve for the security user experience alone. You will mess things up.
So that is my big disclaimer. Don't go this alone. So you need to find your, what I call security UX allies. So that might be folks in legal, privacy, compliance, um, your security team, obviously, uh, maybe you have a dev sec ops team. Maybe you have security engineers, those folks, also customer success managers.
I mean, I created this infographic where there are like 12 different roles, and I can't even remember all 12 of them. There are a lot of different people, and it literally takes a village to improve the security user experience. And the reason that I say that is because you are the expert: if you're a UX designer, you are the expert on your user.
You know what they will and won't do. You know what they understand and won't understand. So you bring a ton of value to the security user experience. However, what you don't know are some of the security threats that are specific to your product. And you need your legal team, you need your privacy team, you need your security team, you need your engineering team, all of those folks that I list in the book. You need them to be part of your security UX ally collaboration in order to improve the security user experience.
So you only see one angle of it; you need to make sure that you have this 360-degree view.
John: Yeah, I think this is great advice for any kind of complex area or, or kind of system here. Lots of enterprise software, lots of developer experience stuff could really benefit from these sorts of things.
Heidi: Yeah. And when I talk about asking the right questions: one of the reasons that I wrote the book is that I didn't feel empowered to ask the right questions, because I felt overwhelmed and I felt like I didn't have the right vocabulary. I felt like, what do I know? The book helps with that. It helps give you kind of that foundation of the things that you should know.
But I also want to say like one of the key questions you can ask is from Adam Shostack's Threat Modeling Framework, and that is what can go wrong. Right? And designers do this already. So, you know, threat modeling sounds like this big, fancy word, right? Like, oh, I can't threat model, I'm not a security person.
You already do it, right? Like, you already ask what can go wrong as part of the user experience, as part of the design process. So don't let that word intimidate you in some way, because you're already doing it. You're just putting a security lens on it.
So, Adam Shostack's Threat Modeling Framework is four questions: What are we working on? What can go wrong? What are we going to do about it? And did we do a good job? And it's perfect. Like, you can't do better than those four questions.
So, I, I use that as kind of like an overarching framework. And then I say, you know, put this security UX lens on it and really think about: What is the threat actor going to be doing? What is the end user going to be doing? And how does the security user experience influence both of those actors? So, how is what you're putting in front of your end users going to influence them?
Are they going to be confused? Are they not going to know what the heck you're saying? Are they not going to know what to do next? Or are they going to say, Oh my gosh, shut up. You've given me this alert 20 other times before and I'm just sick of it. Um, and it also helps you think through that, that threat actor lens too.
Like, how are they going to trick or manipulate our users? How, you know, being financially motivated, what sort of things might they do that would influence the security user experience? So when you're thinking about asking the right questions, that's what I'm talking about.
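(A rough illustration, not from the book: one way a team might capture Shostack's four questions for a single flow, with notes on both Alice and a threat actor. The flow, findings, and mitigations below are entirely hypothetical.)

```python
# Hypothetical worked example: Shostack's four questions with a security UX lens,
# recorded for an invented password-reset flow. Nothing here describes a real product.
threat_model = {
    "what_are_we_working_on": "Password-reset flow for a consumer web app",
    "what_can_go_wrong": [
        "A threat actor triggers reset emails and then phishes Alice "
        "with a look-alike 'reset your password' message",
        "Alice ignores the legitimate reset email because it reads like spam",
    ],
    "what_are_we_going_to_do_about_it": [
        "Rate-limit reset requests per account",
        "Rewrite the reset email in plain language, sent from a consistent, recognizable address",
    ],
    "did_we_do_a_good_job": [
        "Usability test the email with participants who weren't expecting it",
        "Track reset-related support tickets and reported phishing attempts",
    ],
}

# Print the checklist so the cross-disciplinary team can review it together.
for question, notes in threat_model.items():
    print(question.replace("_", " ").capitalize())
    print(notes if isinstance(notes, str) else "\n".join(f"  - {n}" for n in notes))
```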
John: Yeah. And I think it's great to have that starting point, because again, as you kind of alluded to here, there can be a confidence issue when you get into some of these areas. It can be a little scary, but sticking to UX best practices is going to help you no matter the field. Um, you just have to kind of put that security lens on it.
Heidi: Yeah. And so I gave John kudos earlier in the podcast episode, because what I think he's really good at, and what UX researchers in particular, and people in UX in general, are really, really good at, is facilitating those conversations. So coming up with ways, workshops, you know, different exercises that you can do with your cross-disciplinary team to, you know, pose these questions and brainstorm around them and get everyone actively involved.
Like that is UX, that is a UX superpower. Um, and obviously it's something that John's really good at.
John: Thank you.
So, you know, you talked a lot about designing safer systems. What does that mean? And how can designers or other UX people help?
Heidi: Yeah, so I have this analogy that I use in the book. Um, John, you know that I like horses just a little bit. And it struck me, because I just know a lot about horses, how people have designed systems to keep both humans and horses safer.
And one of the things I realized was, Oh, like we, we have done this before. So humans are capable of creating safer systems. I know it because I've seen it. And what happens is people, you know, after living with horses for so many years have gotten, you know, gotten kind of smart. Horses don't, don't understand, you know, this human ecosystem that they live in.
And they probably think some of the things that we do are absolutely bananas, right? But yet, you know, we have to operate within our ecosystem. And what people have gotten smart about doing is just knowing how horses behave and what they're naturally inclined to do. They're big, they run into things, they get scared, they run away; you know, their first instinct is to run.
So people have created systems that, you know, remove clutter from the aisle and make sure that horses have a very clear path they can walk through. Nothing sharp is, you know, anywhere; you always clean up. So, you know, that's one example. Even the way that you tie a horse is a very specific knot that's called a quick-release knot.
So just very simple, you know, almost common-sense sorts of things. But they have designed the system in a way that takes into account how horses behave, as opposed to being like, okay, horse, here's our system, you go learn it, right?
Like, it's just not going to happen. So I use that example, um, not because humans are horses, but because it is an example of designing a safer system. You are creating a safer ecosystem in which, you know, humans and horses can both be safe.
John: Yes. As someone who also grew up in horse country, I think it's a great analogy.
Heidi: Yeah, I think my point is you take the time to understand the horse and its behavior, and you don't expect the horse to change its behavior. In fact, you're designing the system so they can just be who they are, which is what I'm hoping people get out of this book. You shouldn't be expecting folks to change their behaviors.
You should be designing the system so that it is safer for everyone involved. That is one of the key takeaways from the book, for sure.
John: So if you had to give people a couple just best practices, words of advice, what would you say?
Heidi: Yeah. So, you know, going back to the horse analogy, designing safer systems: design out the hazard. That's part of, you know, human factors. And I didn't give a lot of prescriptive advice in the book because, first of all, things are moving so fast.
Second, advice is specific to your product and your users and a multitude of different factors. But just as a few sort of tidbits for folks: think about secure by default, which is a concept that's been thrown around a lot because CISA, the Cybersecurity and Infrastructure Security Agency, issued guidance on secure by design, and secure by default was one of the things that included. Secure by default means it's just secure out of the box; you don't have to do anything to make it secure. So, like, when you download and install Firefox, it's already protecting you from potentially malicious sites.
You don't have to check a box. It's just doing it naturally, which is probably what you're expecting anyway. So that's what secure by default is. Um, the other thing that I talk about in the book is like guiding the user along the safer path. So where they do have to make security and privacy decisions, how about you just help them along the way, right?
If there is a secure way of doing it, how about you guide the users along that same safer path, and don't assume users know what that safe path is. And that is important for everyone: you know, you, me, our mom, our grandparents. But it's also important for technical people, who you might think, oh, they know better. Don't assume that. Whoever it is, guide them along the safer path. Help them choose the right settings, you know, the ones that are going to work for them in their environment, in their particular situation. Don't just assume that they know. Guide them.
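(Another hedged sketch, not from the book: what secure by default plus guiding the user along the safer path might look like in product code. The settings, names, and copy are invented.)

```python
from dataclasses import dataclass

@dataclass
class AccountSecuritySettings:
    # Secure by default: the safest values require no action from the user.
    mfa_enabled: bool = True
    block_known_malicious_sites: bool = True
    auto_install_updates: bool = True
    share_usage_data: bool = False  # privacy-respecting default: opt in, not opt out

def request_setting_change(settings: AccountSecuritySettings, name: str, value) -> str:
    """Guide the user along the safer path: allow the change, but explain the
    trade-off in plain language before accepting a riskier choice."""
    if name == "mfa_enabled" and value is False:
        return ("Turning off two-step verification makes it easier for someone who "
                "has your password to get into your account. Keep it on?")
    setattr(settings, name, value)
    return f"{name} updated."

settings = AccountSecuritySettings()  # secure out of the box, no checkbox required
print(request_setting_change(settings, "mfa_enabled", False))  # prompts instead of silently complying
```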
The other thing, and I said at the beginning, content designers, we really need you, is that you really, really need to choose your words carefully. There are a ton of security terms and acronyms that get thrown around. And I've been doing this for a long time and I still am like, what does that mean?
You know, and ChatGPT can you tell me what this means? You need to be really careful and understand, and this is where the UX people, you know, come into this. You need to understand what users know and what they don't know. And be able to, you know, present the content that is right for them. The other piece of this that you might not have thought about before is that you really need to be ahead of skepticism.
And what I mean by that is, anything that you put in front of users over and over and over again, what happens? John, you know this, right? Like, eventually they just ignore it, right? It's called habituation. So you have to account for that and design accordingly. If you can anticipate, oh, Alice is going to look at this and be like, why do I need to do this?
Why should I care about 2FA? Right? Like, I have nothing to hide, just something we hear over and over again. Be one step ahead of that skepticism. If she's going to see the warning and say, not such a big deal, but it actually is, maybe you need to think about redesigning that warning, or better yet, never having to surface that warning because the system is safer.
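(One more illustrative sketch, assuming a team wants to budget how often the same warning appears so that the rare, high-stakes one still stands out. The thresholds and identifiers below are made up.)

```python
import time
from collections import defaultdict

# Hypothetical habituation guard: suppress a repeated low-stakes warning once the user
# has seen it recently, so a high-stakes interruption keeps its impact.
_last_shown: dict[str, float] = defaultdict(float)
COOLDOWN_SECONDS = {"low": 24 * 3600, "high": 0}  # high-severity warnings always show

def should_show_warning(warning_id: str, severity: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    if now - _last_shown[warning_id] < COOLDOWN_SECONDS[severity]:
        return False  # shown recently; stay quiet to avoid training the user to ignore it
    _last_shown[warning_id] = now
    return True

print(should_show_warning("unverified-sender", "low"))   # True (first time)
print(should_show_warning("unverified-sender", "low"))   # False (within the cooldown window)
print(should_show_warning("credential-reuse", "high"))   # True (always interrupts)
```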
Um, which leads me to my last point, which is about leaning on the technology. There's so much that the technology can do, especially with the introduction of AI that can help create safer systems. We're seeing this with, um, secure coding and helping developers surface and fix vulnerabilities. That is a way for the technology to actually make the system safer.
And we need to find opportunities to really lean on the technology to create these safer systems. Lean less on the end user, lean more on the technology in creating those safer systems.
John: So I slipped in a question.
Heidi: Oh boy.
John: Um, so what's a, what's a question you wish more people asked about human centered security and how would you answer it?
Heidi: I wish they asked: What would the user do in this situation? So throughout the book, I call the end user Alice and give her a name. Um, I personify her. So what I say in the book is, you know, basically: what would Alice do in this situation? Given this set of instructions, given this warning, given this email message, what would she do?
And I think trying to anticipate, again, going back to Adam Shostack's "what can go wrong?", it's the same flavor of question: really thinking about where she might get confused, where she might be skeptical, where she might be especially prone to manipulation. Are you building something where Alice is going to be in a frame of mind where she's especially vulnerable? Maybe something terrible has happened; I'm thinking about healthcare situations, you know, stuff like that, a natural disaster occurred, or maybe Alice lost her job. Thinking about those moments and kind of putting yourself in Alice's shoes. I mean, it sounds really stupid and kind of rudimentary, but I really want people to put themselves in their end users' shoes. And then I also want UX designers specifically (this is going to come naturally to security people, but it doesn't necessarily come naturally to UX people) to think about what threat actors are doing and what threat actors are thinking, and how that threat actor might impact Alice, how they might manipulate her, how they might take advantage of her.
Um, you know, what new thing are they going to throw at her that now Alice has to figure out and how can the system help Alice in those situations?
John: Yeah. Speaking of those cybersecurity practitioners, you know, you've been focusing a lot on improving their experience, and of course other technical users, um, you know, developers, IT admins, database admins, those folks.
Why focus here and, and how does it relate to your book and the research you did for your book?
Heidi: Yeah. So I think that security practitioners, IT admins, developers, folks who are building systems, folks who are building software, building technology, and also, you know, folks working internally at their organizations, often have privileges that the rest of us don't have, right?
They kind of have the keys to the kingdom. They have kind of a trickle-down effect to end users in so many ways. So for me, it's about focusing on those groups of people and making their lives easier. Making it easier for a security practitioner to do their job, to keep us all safe, keep their organization safe.
So make it easier for them to do their job. Make it easier for an engineer to do their job. Security usually is not their primary focus. And some engineers just hate security, right? How can I make it so that they don't have to worry about security so much? That's just baked into their process, baked into their system, and they can do what they set out to do.
Same with folks who are in IT, who again might not be security focused. They just want to make sure that things are running smoothly and people can do what they're supposed to do. These folks make mistakes, right? Um, they also can be manipulated by threat actors. So, you know, it's about making sure that they are enabled and empowered to keep those keys to the kingdom safe, and also facilitating them and making their lives easier. So again, I think it has a ripple effect, or, I don't know, I keep using this trickle-down effect, to everyone else. If they can build safer systems, if they can build more secure products and software, that is going to make us, you, me, us regular folks, safer.
John: Yeah. It kind of reminds me of our conversation with the CISOs about...
Heidi: I love how they're just like "the CISOs." He's talking about Matt Stamper, Gary Hayslip, and Bill Bonney. And we're just like "the CISOs," you know, like, the three of you guys.
John: The three CISOs.
Heidi: They're amazing, by the way, they were on a couple episodes ago. Yeah.
John: Yeah. And I think an important thing that they have to deal with in their job is connections, like data, connecting to other data, different products, integrating together. These are all things that they're worried about and dealing with all the time.
And so if the developer tools are built with security in mind, that means the CISOs have less to deal with or worry about, um, to your trickle down or, um, kind of thought process there. Yeah. So finally, uh, where can people...
Heidi: So where can people learn more? So, um, Human-Centered Security is available for sale on the Rosenfeld Media website, and I'll link to that in the show notes. And then I write on LinkedIn. So I have a newsletter on LinkedIn; I post a lot on LinkedIn.
Um, and you know, folks, subscribe to the podcast, give us a follow. And then I'll give you one more flash of the book. Look at that. Oh, so nice. So again, that's available on the Rosenfeld Media homepage.
John: Awesome. Looking forward to my copy and, uh, yeah, buy your book.
Heidi: Buy the book. Thanks, John.