RopesTalk

On this episode of Culture & Compliance Chronicles, Amanda Raad and Nitish Upadhyaya from Ropes & Gray’s Insights Lab, and Richard Bistrong of Front-Line Anti-Bribery, are joined by neuropsychologist Sarah Zheng to explore the human factor in cybersecurity. The conversation delves into the psychology behind hacking, the role of emotions and context in falling for scams, and the evolving risks posed by emerging technologies like AI and brain-computer interfaces. Sarah shares insights from her research at the Dawes Centre for Future Crime and her book, The Psychology of Cybersecurity, highlighting the importance of operational resilience and creative approaches to cyber awareness. Listeners will learn practical strategies for building a culture of psychological safety and reporting, as well as actionable steps to enhance organizational cyber resilience. 

What is RopesTalk?

Ropes & Gray attorneys provide timely analysis on legal developments, court decisions and changes in legislation and regulations.

Nitish Upadhyaya: Welcome back to the Culture & Compliance Chronicles, the podcast that gives you new perspectives on legal, compliance and regulatory challenges faced by organizations and individuals worldwide. The clue is in the title—culture is at the heart of everything. It’s the endlessly shifting patterns that govern our environment and behaviors. The magic is in amplifying certain patterns and dampening others. Let’s see if we can pique your curiosity, get you to challenge some of your perceptions and give you space to think differently about some of your own challenges. I’m Nitish Upadhyaya, and I’m joined by Amanda Raad and Richard Bistrong. Hello, Amanda and Richard. I am very excited to be recording today. Richard, who did we have last time?

Richard Bistrong: We had an interesting and exciting interview with Klaus Moosmayer on an integrated approach to risk and compliance. And today, shifting lanes, I’m so excited to have Sarah Zheng join us to talk about cybersecurity, among other issues.

[1:05] Getting to Know Sarah

Nitish Upadhyaya: Sarah is a neuropsychologist investigating emerging crime. Think of that as a topic for research—emerging crime from technological advances. Through her research, she helps organizations become more resilient to phishing attacks and improve awareness of new cybersecurity threats. She’s worked in data science—so she’s actually done the doing—in corporations, financial institutions, and government bodies; and her academic research is based at the Dawes Centre for Future Crime at University College London. She recently co-authored a book titled The Psychology of Cybersecurity, and that’s how I came to know Sarah. So, we are really excited to get into this topic that so many people have been asking about: What is the human factor and the psychology behind cybersecurity? Now, before we dive in, I want to help our listeners get to know Sarah a little bit better, so let’s do our rapid-fire round. Give us three things that everyone should know about you.

Sarah Zheng: Thank you so much for the kind introductions. First of all, I’m fascinated by how emerging technologies are impacting society because I think that brings up a lot of interesting questions about how we view ourselves as human beings. The second one, I make a really good beef ragu. And the third one is I have a Colombian alter ego. I was there just a few months ago for a wedding, and my friend said because I danced salsa so happily, my alter ego is Sarah Cruz.

Nitish Upadhyaya: I love it. Well, we are all foodies on this podcast and big avid travelers as well, so we hear you on so many of those things about yourself. Now, turning to life more generally, what’s one thing that you’re curious about?

Sarah Zheng: I recently started a new project, which is about the crime and security implications of brain-computer interfaces. This is really a full-circle moment because I did psychology and neuroscience in my studies, then went into IT and data science, and now these two things are coming together. It has brought up so many interesting questions—for instance: How much enhancement do we actually want as human beings with these technologies? Are we really heading toward a future that is transhumanist, as Sam Altman is saying? It’s something I’m really curious about.

Nitish Upadhyaya: We look forward to hearing more about that side. And because all of our guests on this podcast are human, they can be surprised. What is the last thing that surprised you?

Sarah Zheng: Someone sent me a mini documentary recently about a huge scam. There were actually three young men—one a 19-year-old teenager and two in their early 20s—who managed to steal about $214 million worth of Bitcoin from a wealthy person based in the U.S. After they managed to scam this person out of so much money, they went on social media to flaunt their lavish lifestyle, using up all that money, and that’s actually how they eventually got caught. So, it’s a good thing they got arrested, but it’s baffling to me that people who are so young are able to conduct these really sophisticated scams and that people actually fall for them. I think this really emphasizes why we need to be talking about cybersecurity more.

Nitish Upadhyaya: That’s a perfect segue into understanding more about human behavior. So, what is the role of psychology in cybersecurity, whether that’s the psychology of a teenager or a lifetime hacker?

[4:25] Role of Psychology in Cybersecurity

Sarah Zheng: I think psychology plays such a key role in cybersecurity. When we started writing this book, it really began with the question, “Why do we need cybersecurity in the first place?” And it’s really because we have people out there who are trying to break into organizations all the time, every day, and you can ask yourself, “Why is that the case?” Of course, there is a monetary incentive, because there is obviously a lot of money to be made from ransomware attacks—we’ve seen many examples just in the last year—but there is much more to it. When we started looking into who these people are that eventually become hackers, it tends to be teenage boys in the majority of cases. There are psychological factors driving them to become hackers: maybe they are seeking the thrill of breaking into a system as a cure for the boredom they might be feeling in their day-to-day lives. There might also be a sense of loneliness in their real life, where they might not feel part of a community in their direct social environments and schools. And so, when they go on online hacker forums, let’s say, there’s a huge community out there of people who might be feeling the same way, and they say, “Well, let’s do this badass stuff and try to break into these systems.” That is how it starts.

I think it’s really important for us to also pay attention to how we can prevent young people from going down a nefarious pathway, because hacking isn’t necessarily just bad—we know that in cybersecurity practice and red-teaming exercises, for instance, we employ people all the time to try to break into our systems to find vulnerabilities that we can then help solve. But for someone to go out and say, “Well, I’m going to be a professional criminal hacker,” that’s something else. Apart from ruining organizations, they’re actually also ruining their own lives when they do get caught. We saw that with one of the people we interviewed in our book, who got caught for a major cyber attack and then decided to become one of the good guys, let’s say—an ethical hacker. This really goes to show that there’s a lot of psychology behind why people create the need for cybersecurity, and that’s why the first part of the book is dedicated exactly to hackers.

Nitish Upadhyaya: Taking the flip side of that—we talked a little bit about the psychology of the hackers—human error tends to be a real issue in cybersecurity, so now we’re thinking about the people and organizations that are being hacked. Can you give us a neuroscientist’s view on how our brains construct information and our worldview, and how that might lead to risk?

[7:05] The Way Our Brains Work

Sarah Zheng: The way to think about the brain is that it creates meaning based on context. And so, when you receive an email at work, for instance, the first thing you’re doing when you look at that email is asking, “Does this fit within my job role?” Let’s say it comes from someone pretending to be your boss: “Is this something that my boss could actually ask me in my normal day-to-day work?” If the answer is “yes,” then you’re more likely to comply with whatever that email is asking of you, whether it’s actually your boss or not. But if the context of that email is not aligned with your work, then people start thinking, “It’s a bit suspicious.” And then, they might start looking into technical security indicators like the actual email address of the sender, or maybe there’s a link in the email that they can check. But in most cases, even when people think, “This is a little bit suspicious,” they won’t check those actual security indicators, because we’re so wired to just make a judgment based on context. And so, there are social engineering attacks that try to use exactly those persuasion techniques—for instance, Cialdini’s six principles—to get people to believe that the context portrayed in the email is actually legit.

Amanda Raad: I always try to think about the various ways that emotions come along with, frankly, the pace at which we live, and in this space in particular. Some of the most successful attacks that I have seen, or almost fallen prey to myself, are ones where I’m in a rush and trying to do a whole bunch of things at once, but also where it sparks some kind of fear or emotional response in me—something has been messed up, or something is threatening in some way. Can you comment on human emotions, the speed at which we live, our separation from each other, and how those work together?

Sarah Zheng: I think you’re so spot on to say that when you’re in a rush or feeling stressed, it’s much easier to fall for something, because you might not be 100% focused on the actual content of the message or on certain security indicators that are very easy to overlook—for instance, the actual sender’s email address, or a logo that’s a bit off from what it should look like. As for emotions, that’s exactly what social engineers try to exploit. They are trying to make you feel afraid that if you don’t comply with, let’s say, paying a ransom, then they’re going to destroy all of your systems. If they are able to make you feel this sense of urgency—you need to act right now—it’s much easier to fall for it, because you’re thinking, “There must be such a huge risk in not acting right now.” And I think it actually takes a lot of self-awareness to go beyond that initial emotional response and to think, “Wait a minute. Is this actually the person that I think it is?”

Amanda Raad: Following up on that, there’s the ransom part, but even in the first place, you might accidentally click on a link that you shouldn’t because you’re moving too fast and you don’t recognize it—or because they suggest something threatening to you, and so, out of curiosity or just a feeling that you can control something, you dive into it quicker. Is that purposeful as well?

Sarah Zheng: Yes, absolutely. So, in some of our studies, we’re investigating why people might repeatedly fall for the phishing simulation campaigns that are often run within organizations. We find that people click for many different reasons, and one of them is that they’re curious just to find out, “Is this actually potentially a phishing website? I should check it out and actually go there.” That’s why in studies we also say, “You’re only compromised if you actually enter your credentials on such-and-such website,” because just clicking on the link shouldn’t immediately lead to a compromised account or system in whatever way. I can’t name the company, but last year, we saw this huge attack on a major British retailer which started with a phishing email. And so, this person clicked on the link in the email and then actually entered their credentials, possibly because the website was made to look exactly like the typical login page for the company, so it’s much easier to believe, “Oh, well, I went here to check if this is legit, and it looks legit. Therefore, I entered my credentials.”

[11:45] Tips for Recognizing and Avoiding Issues

Richard Bistrong: So, Sarah, how do we keep up with all of this? I was reading an article about establishing a safe word with family members, because now with AI we can have technology—and this happened to a friend of mine—where someone claiming to be a relative called and said, “I’m in the hospital. Can you please wire money, because I’m not covered by insurance? I’m not in my network.” Another good friend of mine was telling me this story, Sarah: someone hacked into his system, and then he gets a call from the “FBI” saying, “Well, what we’re going to do is we’re going to catch him, and we’re going to pay the ransom, so wire me the money for the ransom.” And I’m like, “Time out. The only time you’re going to hear from the FBI is when they knock at your door and a case agent introduces himself or herself with a card. You’re being double scammed here.” From an emotional, psychological perspective, how do we keep up with all the evolving changes in social engineering? Is this even a solvable problem?

Sarah Zheng: I’m sorry that happened to the people you know, but I love that idea of having a safe word, because I think a lot of the social norms that we have in our physical reality don’t really translate to how we communicate over digital means. So, one of the ideas that we also describe in the book is about digital norms, and something as simple as having a safe word that you can use to verify another trusted person could be very effective, I think. Dealing with those emotions is another, I guess, under-researched topic, because we know that, apart from suffering financially, people also have their psychological safety disrupted in some way because they fell for a fraud or a scam. A particularly awful example here is romance scams, where people already feel lonely, and maybe depressed because of that, and then get tricked into believing they’ve found a lover and get exploited financially as well. It’s horrendous when you read those stories. To be fair, I’m not entirely sure what the best possible way is to completely prevent these things, but what does seem to be a common thread through all scams and frauds is that you’re being asked something out of the ordinary, something unsolicited, and you always have to do your due diligence, whatever the request is, even if they pretend to be a friend or relative—that’s probably the best advice I could give.

Richard Bistrong: Another example—and this happened to me not that long ago—where I was calling up my pharmaceutical provider and somehow or another that phone number got diverted. It was really like a normal call in terms of getting a prescription filled, and right at the end, Sarah, it was, “Oh, and we need your checking account information because you have a credit due to you.” I’m like, “Time out, and we’ll see you later.” But it was just a little question at the end, and had I not been so aware of what was going on, I could easily have slipped and provided that information. So, well said: we just have to keep our radar on for how these things are being socially engineered.

Sarah Zheng: Absolutely. And that sounds like a very insidious way of trying to exploit people. I think these AI voice-cloning techniques are becoming so good, and especially if there are private conversations between you and friends or family somewhere out there on the internet, it becomes easier for an attacker to imitate the kind of language you tend to use. Yes, that’s absolutely something to be very aware of. Anything that asks for your personal details or makes requests out of the ordinary—you shouldn’t trust it right away, and always ask through the official channel. So, if someone pretends to be from, let’s say, your bank or the police, always check by calling back the official number that you know for the police or the bank, and always check with someone else in your physical environment: “Is this legit?”

[16:15] Social Engineering

Nitish Upadhyaya: Can you talk to us a little bit more about social engineering and how that might work, especially for organizations where people get diverted from the things they would want to be doing in the moment? I think we’re moving swiftly from the Nigerian prince—“You’ve won some money. Can you wire a little bit, and we’ll give you the rest?”—to something that is, as you describe, really sophisticated: maybe cloning your CEO’s voice or email. What are you seeing happen, and what should people be aware of from a social engineering perspective, beyond the callbacks and things that we’ve just been discussing?

Sarah Zheng: I think with every new emerging technology, we see that criminals act opportunistically. So, with AI, we’re seeing more sophisticated types of social engineering attacks. The one that springs to my mind is the case, I think two years ago, of a British engineering company that also has a branch in Hong Kong. A colleague in Hong Kong thought they were getting an email and Zoom invite from their CFO based in England and got scammed out of about 25 million pounds. The actual video call was with a deepfake version of the CFO, and the person was made to believe that it was actually their CFO asking them to make this transfer. I think these kinds of situations, even though they sound outrageous right now, might become more common in the future.

Another project that I recently worked on is on social robots—physically moving robots that can interact with you and me through AI, essentially. We started looking into how criminals could exploit these kinds of, let’s say, humanoid robots in the future, where, for instance, they might be used by the police to patrol public spaces or to monitor suspicious activity in airports or on the streets. But we also see the rise of AI agents in many different business contexts, so instead of talking to an actual person in customer service, you increasingly talk to an AI agent. Let’s then imagine how a hacker might be able to carry out those kinds of social engineering practices through a robot, or even through an AI agent that gets hacked, where maybe they’re able to fully automate the hacking process. With every new technology, we need to be very attentive to how criminals could potentially misuse it, so that policymakers, regulators, and law enforcement are actually prepared for those scenarios. A lot of the work that I do at the Dawes Centre for Future Crime is exactly about this—we’re trying to anticipate these risks and inform policymakers what to do. Ideally, we would also work more with industry, but we sometimes find that they’re a little bit apprehensive when it comes to anticipating such risks, because they don’t want to draw more attention to these kinds of situations. But I do think we need to tackle them head on.

[19:15] What Organizations Should Be Asking Themselves

Amanda Raad: Well, that’s a perfect transition, Sarah, because I was just going to say let’s focus on organizations for just a second. What are some of the questions that you think organizations should really be asking themselves so that they’re focused on operational resilience instead of just thinking about prevention?

Sarah Zheng: That is a hard one, because it really prompts organizations to dig deep and reflect on themselves: “What does human risk look like in our organization? Where do we see vulnerabilities? Are there certain departments or job roles that tend to be targeted more? And are we doing enough to help those people be safe and secure, or are we leaving gaps in terms of awareness or education?” A lot of the cybersecurity education and training that organizations implement—for compliance reasons, for instance—tends to follow a cookie-cutter format: the same training for all employees, done once a year, 30 minutes online, and that’s it. But is that really effective? I think we can start by asking those questions. Maybe instead of these kinds of online trainings, you could consider nudges that you can put within the interfaces in which people communicate—for instance, in Teams, in your email client, or even on people’s phones if they are required to make phone calls for work. There is a list of 25 situational crime prevention techniques that police have been implementing for the last few decades to prevent traditional types of crime, and I think we need to start using those tactics much more in digital spaces as well. In the book, we also talk about how organizations might use them in more creative ways to create other types of guardrails that might not exist yet. So, there’s really an invitation to organizations to think outside the box and say, “Well, apart from our cyber awareness campaigns, can we do something else? Can we do something that’s more engaging?”

Amanda Raad: I’m glad you brought up the cookie-cutter danger and the opportunity to think differently about this. We spend a lot of time on this podcast talking about all kinds of different risks and vulnerabilities within organizations and the need to look at how those present themselves uniquely in each organization, with the people that make up that organization, and I think this is no different. You really need to be able to bring both the prevention lens and the operational-resilience lens to the individual situation and go deeper than that surface-level, cookie-cutter approach, so I’m really glad you called that out. Just one follow-up: I always worry about people going it alone when they might be embarrassed that they have fallen for a scam. Do you have thoughts on how organizations might try to get people not to waste precious seconds in those, perhaps, defining moments?

Sarah Zheng: Yes, that’s a really good point, because it’s this sense of shame when you fall for a scam or fraud that makes people not want to talk about it, but for many different reasons, we should encourage people to report these things. And I think the best way to do that is by creating a culture within your organization in which people feel safe enough to speak up about these things. It has a lot to do with the blameless culture that we actually find in cybersecurity teams. Some of them are implementing what we call a blameless post-mortem analysis after a cyber incident: “What can we learn from what went wrong?” instead of pinpointing who was the culprit. And I think the same has to happen across the organization, where if a person suspects they may have entered their credentials on a malicious website, they should know who they can talk to: Is it their manager? Is it the IT service desk?

Richard Bistrong: Sarah, what about how we communicate? We’ve got our laptops. We have our phones. We have our tablets. And I read an article about a behavioral study which said we are more aware of and sensitive to being scammed on our phones, and will resist, than we are at our computers, where we’re less sensitive and aware—which, to me, was almost counterintuitive. So, any thoughts from your perspective and your research on the relationship between our hardware, how we’re communicating, and the risks involved in those communications?

Sarah Zheng: That’s really interesting. I have not personally done any studies comparing the effect of devices on, let’s say, scam vulnerability. In a way, you can’t really blame people for being curious about what a website is about, and so, instead of just focusing on reducing click rates in organizations, I think we should be more worried about people actually giving away personal details. Organizations should look beyond click rates and look at actual malicious behaviors: for instance, data exfiltration—copying data onto an external hard drive or somewhere else that you don’t want it to go—entering actual credentials, or giving away personal or confidential information about your organization. These are the actual behavioral indicators we should be looking out for.

[24:50] Going Beyond “Raising Awareness” to Protect Your Organization and People

Nitish Upadhyaya: One of the things I found fascinating in the book was your summary that everyone has an intuitive understanding that cybersecurity is important—we get that. Everyone knows that hacking happens; people have stories—friends of friends, or themselves, who have become victims—or they’ve read something in the papers. But we primarily care, you say, about getting the job done. On a day-to-day basis, we are here to be a lawyer, a doctor, an accountant, whatever it might be, and security behavior is almost a secondary aspect of the job. You said that if your goal is to get the organization to act more securely, then you need to create the circumstances for people to do so. You’ve alluded to that a little from a training aspect, and to in-the-moment prompts—as you’re about to send something, a message pops up. But what does that really look like in terms of actionable takeaways for our listeners? How do you create a cyber-secure, aware, and actionable organization?

Sarah Zheng: I think one of the assumptions that organizations tend to make is that if we invest enough in awareness, and people know about the risks, then automatically their cyber behavior will be more secure. The number of cyber incidents we’ve seen over the years shows that that’s not the case. So, instead of just focusing on making people aware of risks—like you said, many people already know of them through stories, and many people have already fallen for some sort of scam—we should think outside the box and look at, for instance, those situational crime prevention techniques: “How can we apply them in our organization?”

Another example is adversarial training, which is a different approach to cybersecurity education, where we’re not teaching people, “These are the things you need to avoid. These are the things you need to look out for.” Instead, we’re allowing people to think like a cyber criminal. In this training, we essentially showed people a video of a hacker talking about how they would devise a sophisticated phishing attack on a specific individual from a specific organization using all kinds of data harvested from social media, and then we gave people three different scenarios in which to create their own highly targeted phishing attacks. The underlying idea is that if people are able to empathize with how the cyber criminal thinks, it may occur to them to look out for certain tactics and the technical indicators that those cyber criminals will fake. After people went through this adversarial training, we sent them a simulated phishing campaign two weeks later and compared the click rates to groups that had either no training or a conventional type of training where we tell people what to do and what not to do. Although the result was not statistically significant due to the small sample sizes, we did find that the adversarial training group clicked almost three times less than the group that had conventional training. So, we would love to test this further with other organizations—we’re talking to a few quite large companies that are interested, because this study was very much in an academic institute with a relatively small sample—and we would love to see: Does it translate to other types of businesses? Does it work in other regions? So, that’s another example of how to think outside the box.

Also, on digital norms, I think organizations should really ask themselves, “How do we want people to respond to unsolicited requests? How do we want people to deal with, let’s say, something urgent that their boss allegedly wants from them?” These are, I suppose, longer-term things that actually are not costly at all—you can do that right now. It doesn’t require any sophisticated technology, because this is really about how you want people to shape the organization and its culture with each other. How do we build that through clear digital and physical norms? How do you build that through other kinds of engagement, other kinds of training?

[29:10] Key Takeaways

Nitish Upadhyaya: That’s a great series of lessons for people to take back. I’m going to come to you, Sarah, for one silver-bullet piece of advice for listeners in a minute, but I’m going to start with Richard. What’s your takeaway?

Richard Bistrong: Well, two. First, as soon as we’re done with our podcast, I’m going over to Amazon to order Sarah’s book, because this is such a fascinating topic. But one thing that I wouldn’t have tied into this conversation, and that I now appreciate, is the relationship between psychological safety and not feeling shame in saying, “This is what happened.” I go back to that conversation with my friend who got double scammed. My heart broke. He was so embarrassed to even share the story with me, and I’m like, “Hey, let’s calm down. Here’s what you need to do. You need to call your attorney,” etc. But I can see people getting deeper into the wrong rabbit hole trying to cover things up. So, don’t be embarrassed to share if you’re concerned about something, or if you did something that we need to look at.

Nitish Upadhyaya: And, Amanda, what about you?

Amanda Raad: Yes, I also have two. One was exactly what Richard just said. And I loved the blameless post-mortem exercise—safely exploring what happened after an incident without attributing any blame—I think that’s really important. I was also thinking of some of our previous podcasts where we’ve talked about making space for people to make the decisions they’re trying to make, and building a culture where people are able to slow themselves down a little bit. I was thinking about that quite a lot as we were talking: making sure there’s enough space to be as safe as you can be in the world we’re all living in, with more and more technology no doubt coming, things moving faster and faster, and the need to resist that a little bit. What about you, Nitish?

Nitish Upadhyaya: I think, for me, it’s this idea that awareness is not enough—scammers and hackers are going to get creative with their techniques. The question is: Why are we not getting more creative? Why are organizations not getting more creative with the way they train people? Your example of adversarial training—getting people into the minds of hackers and making it fun and engaging, something they’ll tell their partners and families about—really sticks in people’s brains. When something’s wrong, how might someone be coming for you? Building that intuition and those street smarts—I really like that creative approach. So, we have our takeaways, but, Sarah, before we go, do you have any final advice for our listeners?

Sarah Zheng: Yes. I would invite people to take a moment to reflect. Take a sheet of paper and make three columns. In the first column, write down which cyber incidents or attack types you are most afraid of this year. In the second column, list whether your organization is ready to deal with those kinds of incidents—as in, what measures you have already put in place to deal with the attack types you’re most concerned about. And then, in the third column, list what, or what else, you could be doing right now to become more resilient to those things.

Nitish Upadhyaya: Amazing. I suspect lots of people are going to be wanting to find out more about you and your work. Where can they do that?

Sarah Zheng: You can find me on LinkedIn—it’s Sarah Y. Zheng. Or through my website—you can reach me directly there—it’s syzheng.com.

Nitish Upadhyaya: Thank you so much for sharing your stories, your experiences, your research, and, most importantly, your top tips for getting people switched on and making sure they are thinking about this stuff on a day-to-day basis, because it is out there, and someone somewhere is going to end up experiencing it. Hopefully, we’ve caught a few more of those risks, raised awareness, and changed some action patterns along the way. So, thank you so much for your time, and we look forward to seeing more of your work in the years to come.

Sarah Zheng: Thank you so much as well.

Nitish Upadhyaya: Thank you all for tuning in to the latest episode in our Culture & Compliance Chronicles series. For more information about our series and any of the ideas discussed today, take a look at the links in our show notes. You can also subscribe to the series wherever you regularly listen to podcasts, including on Apple and Spotify. Amanda, Richard and I will be back very soon for our next chapter. If you have topics you’d like us to cover or novel perspectives you want everyone else to hear about, get in touch. Thanks again for listening. Have a wonderful day and stay curious.