There Has to Be a Better Way?

On this episode of There Has to Be a Better Way?, co-hosts Zach Coseglia and Hui Chen interview Nitish Upadhyaya, director of behavioral insights at R&G Insights Lab. Nitish is a lawyer and behavioral scientist whose research has focused on how humans interact with bots. Based on Nitish’s findings, the three discuss how technology might complement human strengths in the ethics and compliance world. They also touch on human-centered design and Nitish’s revolutionary way of thinking about effective training programs.

What is There Has to Be a Better Way??

A Ropes & Gray (RopesTalk) podcast series from the R&G Insights Lab that is a curiosity-driven hunt for good ideas and better ways to tackle organizational challenges.

Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, for those who find themselves asking, “There has to be a better way, right?” There just has to be. I’m Zach Coseglia, the co-founder of R&G Insights Lab, and I’m here, as always, with my friend, colleague and collaborator, Hui Chen.

Hui Chen: Hi, Zach—glad to be back.

Zach Coseglia: Glad to be back too. I am really excited about today’s podcast because it’s the last of the R&G Insights Lab crew. We are joined today by Nitish Upadhyaya. Nitish, welcome.

Nitish Upadhyaya: Hello—thanks for having me.

Zach Coseglia: Thank you. Thank you for being patient too, because we went through the group pretty quickly, but then had a long list of external guests. We are so happy to close the loop and have you with us today.

Nitish Upadhyaya: I’ll try and make it worth the wait.

Zach Coseglia: Already has been. We like to start, Nitish, as you know, by getting to know our guests just a little bit better professionally, and then at the end, we get to know them a little bit better personally. So, I’ll ask you the existential question that I ask all of our guests at the outset: Who is Nitish?

Nitish Upadhyaya: I started my career as a lawyer. I investigated bankers and corporates—insider dealing, market manipulation—both on the investigation side and also helping them with remediation. “What do we do next? How do we make sure this never happens again?” Very much from a legal angle. And I was lucky enough to work across multiple jurisdictions, including the U.S. and Europe. I then decided that actually what I really was interested in was improving the business of law. So, I led an innovation team, thinking about how legal technology could really revolutionize how lawyers work, giving them a chance to do the legal work, and bringing outside skills—user-centered design, data science—into the world of legal. I then moved across to my first love, which is human behavior. Why do people do what they do? Why do they say one thing and then do something different? Both in my user-centered design work and in my legal work, I found that often that was the missing puzzle piece in what people were trying to do. I have a master’s in behavioral science, and now, I’m lucky to work with you folks as director of behavioral insights at R&G Insights Lab.

Zach Coseglia: We’ve had several behavioral and social scientists on the podcast to this point, including our colleague, Dr. Caitlin Handron, who, as you know well, and our listeners at this point probably know well, is a behavioral scientist, but identifies as a cultural psychologist. We’ve also had and talked to, or about, behavioral economists and anthropologists. I’d like to hear how you identify within the broad umbrella of behavioral science.

Nitish Upadhyaya: I am interested in the intersection between human behavior and technology. How that is going to change the way in which we work, and how that will affect what we need to know, what we need to be trained on, and how we approach the world of work and business going forward. So, that’s one of the behavioral elements that really interests me. I’m also fascinated about the science of complex systems. How do these complex ecosystems in which we live, work and play really manifest, and how do we start to make them change for the better?

Zach Coseglia: Let’s talk about your first interest first. I actually want to introduce folks to your dissertation, a very cool topic that you wrote about at the London School of Economics, which was, Are Certain Personality Types More Likely to Cooperate with Machines? Are they?

Nitish Upadhyaya: It depends. That’s my lawyerly answer. Putting my behavioral scientist hat on, it’s a complicated issue. The exciting news is that we just got published, so it’s gone from a dissertation into a live paper.

Zach Coseglia: I know that everyone’s initial thought when I said we were going to talk about your dissertation was, “I want to read that.” So, now they can.

Nitish Upadhyaya: Here’s a simpler version for you all. Think about the wider picture—the technology and our interaction with it, the point that I talked about earlier—it’s growing exponentially. We’re talking about e-commerce, trading in financial services, or the initial diagnosis of an ailment in the health care context. Some are going as far as proclaiming the dawn of the fourth industrial revolution. That’s one piece. Now, economists, behavioral or otherwise, have been studying interactions between humans for decades. One of the things that drives economic growth, social cohesion, and generally our day-to-day lives is cooperation between humans, and in particular, reciprocity—“I do something for you, you do something for me.” Generally, studies show that people reciprocate. Now, let’s connect those two topics. What happens if a bot does something for you rather than a human doing something for you? It might give you guidance or it might give you advice. It might help you make a deal. What do humans do when they’re given the chance to take advantage of that bot’s initial act?

My co-author, Matteo Galizzi, and I asked the question, “In bot we trust?” And that’s the snazzy title for the paper, by the way. We asked bots and humans to play something known as the trust game. Economists have invented all of these funny little scenarios and games, which reveal what people do in certain situations. In this game, you’ve got two players and they play in sequence, so the first player doesn’t know what the second will do, and the second player is very much responding to the move of the first player. The first player is given a pot—let’s say $60. They can decide to split that pot evenly, $30 for themselves, $30 for the second player, and the second player doesn’t get a turn. That’s it—they both walk away with $30. Or they can hand over control of the game to the second player and give themselves a chance of earning more. So, let’s say the first player is trusting and hands over that control—the second player now has two options. The experimenter increases the size of the pot, and they can either reciprocate and split that newly increased pot evenly, so they both go away with $70—remember, they got that opportunity because the first player trusted them and passed the baton over—or the second player can grab, let’s say, $100 for themselves and leave player one with nothing. That’s obviously the best economic outcome for them—they get $100 rather than $30 or $70—and what that second player does is a measure of reciprocity. We played these games with humans first and then bots—humans playing as the first player and the second player, and bots playing as the first player with humans playing as the second player. Now, earlier studies had suggested that humans show lower levels of reciprocity when interacting with bots compared to humans. So, the first part of our research was to test whether that held true. Spoiler alert: It does. Humans reciprocated less with bot players than human ones.
Then, the novel part of our research was asking whether a player’s personality traits affect their decision to reciprocate. Ultimately, we found that players who exhibited higher levels of honesty, humility, and to a lesser extent agreeableness reciprocated more, and that applied whether they were dealing with human counterparts or bot counterparts.
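The payoff structure of the trust game described above can be sketched in a few lines of Python. The dollar amounts are the ones quoted in the episode; the function name and structure are purely illustrative, not part of the published study.

```python
# Illustrative payoffs for the trust game described above, using the
# dollar amounts quoted in the episode. The function name is hypothetical.

def trust_game(player1_trusts: bool, player2_reciprocates: bool):
    """Return (player1_payoff, player2_payoff) for one play of the game."""
    if not player1_trusts:
        # Player 1 splits the $60 pot evenly; player 2 never gets a turn.
        return (30, 30)
    # Player 1 hands over control and the experimenter enlarges the pot.
    if player2_reciprocates:
        return (70, 70)  # even split of the enlarged pot
    return (0, 100)      # player 2 grabs the lion's share

# What player 2 does after being trusted is the measure of reciprocity:
print(trust_game(True, True))   # trust rewarded: both beat the safe split
print(trust_game(True, False))  # trust exploited: player 1 gets nothing
```

The sketch makes the tension visible: once trusted, player 2's selfish option strictly dominates economically, so any even split is evidence of reciprocity rather than payoff maximization.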

Hui Chen: Do you see that trust relationship evolving as technology evolves?

Nitish Upadhyaya: Absolutely. I think we’re still very much at the start of this relationship. People are seeing what technology can do. And with the advent of ChatGPT—I know you’ve talked about that with a number of your guests—people who are not technologists are finally getting their hands on some of these supportive tools that they can use to write blog posts, or create whole training seminars or talks. That relationship is going to change. We have gone from the Terminator view to, “What can we do together? What can we be released from as human beings? What should we be released from as human beings, so we can do some of the more clever thinking or reflective, emotive work, supported by AI?” I think that is also particularly interesting in the compliance world. What happens if compliance departments start with compliance by bot? “I have a question. I don’t have enough compliance officers to cover it, but I’m going to put it into a model.” Say it’s ChatGPT for compliance or a chatbot—my interesting question is: Is the advice or guidance that you get from a bot treated differently than if it comes from a human compliance officer, and what impact does that have for risk in an organization? That’s the real-world issue. I think things are changing and people are becoming more familiar with this, but it’s an open question.

Zach Coseglia: What do you attribute the differences to? Is it about trust? Are we naturally more trusting of each other? Which, to me, actually sounds far-fetched at times, given a lot of the work that we do in culture and around interpersonal relationships. Is it trust? Is it comfort? Is it insecurity? What do you attribute it to?

Nitish Upadhyaya: There’s been some research in this area. Factors include emotional and social concerns, questions around where the money the bot is getting is going (in the context of the game at least), cultural backgrounds, what information is provided, and also, how the game itself is structured, because this is in one sense quite an artificial construct helping to bring out what might happen in the real world. In our study, we looked at factors like age, experience with bots, and how religious people said they were, but none of this really mattered. Ultimately, personality types that reciprocated more with humans also reciprocated more with bots. And generally, people reciprocated less with bots than with human beings. Attitudes towards AI, which one might have suspected to be relevant, weren’t.

Zach Coseglia: You said that folks who were naturally more trusting of people would be naturally more trusting of bots, and I assume that that means that people who are naturally less trusting of people would be naturally less trusting of bots. Anything that we can glean from this early research around how we can position these tools in, for example, the ethics and compliance space, or in an organizational context in ways that create comfort and create a level of trust? Otherwise, it seems like it’s very much driven by the person interacting with it, as opposed to what we can actually do to the thing itself.

Nitish Upadhyaya: I think, early on, the indications are that you could prime elements of agreeableness, honesty and humility in people, thinking about what activates those receptors and people’s understanding of what the bot is actually trying to do. Understanding why the bot is interacting and where it comes from can make a big difference, and how that interaction is presented to the individual can, I think, really dramatically change how that individual perceives what they are being told. And it’s the same as trusting a compliance officer. Someone you know, who’s been on the desk for a long time and knows the business, versus a compliance officer who’s been in the post six months, whom you don’t have a very good relationship with and who you’re not sure really understands your problem—you might approach those people in very different ways. I think if we’re looking for bots to support our compliance and ethics efforts, we need to give people the right context: where the bot comes from, where its information comes from, and how it is trying to assist. That shapes what people say to the bot, how they query it, how open they are with it, and ultimately, how much they trust what it tells them to do or not do, and what decision they make.

Zach Coseglia: This was something that came up in a conversation we were having with a client just the other day. For background, we were talking to a client about future casting with respect to technology and innovation, and its impact on the legal and compliance spaces. What we wound up talking about was the role of the human—the amplified or different role of the human, and of human intelligence in a world where there’s increasingly more impressive artificial capabilities. Hui, I know that you have a view on this, but what is the changing role of the human as technology continues to impress?

Hui Chen: Human experience will actually become more important, because there are going to be judgment calls that the machines are still learning to make, and the human judgment call, the issue spotting, the making that decision is going to be critical. But at the same time, because human beings learn those judgment calls from experience, and so much more of that experience might be taken away by machines, what worries me is that humans will have fewer opportunities to make smaller mistakes. We all learned by being in more junior positions and making decisions that don’t have as severe a consequence, and as you progress in your career, you make bigger and bigger decisions in the sense that they impact more things. This is how humans have learned, but now, many of those smaller decisions increasingly might be taken away from humans. So, how do we learn? As that judgment becomes more important, the ability to form those judgments might be diminished, and that’s what I really worry about.

Zach Coseglia: Nitish, what’s your thought about the amplified role, or the changing role of the human, and of human intelligence in an increasingly artificial world?

Nitish Upadhyaya: I think it’s a really valid point—we gain experience through mistakes that we make or lessons we learn over a series of years. What we need to learn is just different, in my view. Our critical thinking capacity, our ability to think in abstractions, our ability to be innovative in how we view the world—those are our real strengths. And look, I don’t think we need to create machine learning models that replicate exactly what human beings do—I think that’s a real failure, because then you’re creating two of exactly the same thing. What you want to be doing is creating a model or a system that complements the human piece. Now, what can machine learning models do? Take review in the health care context—what can it do there? It can counter bias. It can catch things people look at but don’t properly see. So, inattentional blindness is a big thing. In one study, a series of radiologists were asked to look at an image, and in that image, a photo of a gorilla had been inserted. Now, 17% of those radiologists found that and said, “There’s a real anomaly.” The rest of the radiologists did not see it as an anomaly. They looked everywhere else, and they found what they were expecting to see. A technology model would have seen that and called it out as an issue. What there is for the human being to do is think about, “How do I speak to the patient? How do I couch the operational risk? How do I think about all of the other pieces of evidence that are available to me and what we can do?” So, I think it’s not about creating something that takes over what a surgeon does—it’s about complementing, supplementing, and enhancing what the human being is able to do, thereby removing some of the foibles and problems that come with human decision-making. Zach, you know I talk about this a lot: Where’s that human piece coming from?

Zach Coseglia: We talk a lot within our Lab and on this podcast about the importance of human centricity. Our work, whether it’s in the compliance space or the DEI space, or whether we’re just delivering more traditional legal work in a more modern way, is all about leading with this idea of putting the human at the center of the analysis. That’s reflective of our focus and interest in behavioral science, but it’s also very much inspired by the principles of user-centered design and design thinking. What is design thinking? What is user-centered design?

Nitish Upadhyaya: Ultimately, it’s about making sure that an experience, a product that you are putting together, meets the needs of the individual or group that you’re catering for. Removing or reducing barriers to entry and making sure that the experience itself is smooth and people are able to get to the output is the essence of user-centered design. It’s about appreciating that a human being, at the end of the day, is the person that’s picking up your tool, your product, your service. And that applies not just to physical products—this is where I think people can be more creative about what they do. It applies to training. It applies to how you speak to individuals. It applies to physical spaces and how you design them for interaction in the post-COVID world, coming into the office, and making sure that they work together. I love the example of one of the big movie studios, an animation studio, that, when it created its new offices, realized that people from different departments generally bumped into each other when they went for a comfort break. And so, it created toilets at either end of the building, so you had to walk through many different places to get there, and generally, the physical design helped to spark innovation and creativity. So, user-centered design is not just about the iPhone or your iPad—it’s much broader than that.

Zach Coseglia: When you say it like that too, it just crystallizes how relevant it is to the areas that we operate in. It’s not just about your iPhone, it’s also about the policy that you’re developing. When you want people to understand their accountabilities, when you want them to understand risk, when you want them to understand what expectations are in terms of shaping behavior, creating a policy that is 15 pages long, single-spaced, dense and difficult to understand, with a lot of legalese, feels like something that isn’t very human-centered. Now, the interesting thing is, we talked to Benjamin van Rooij about this, and he actually did a study which suggests that maybe making something a little bit more human-centered in this context doesn’t work. But as a matter of common sense, it seems like it just makes sense for us to spend more time thinking about the human when we’re building a compliance program.

Nitish Upadhyaya: Absolutely—we have to be intentional and deliberate about what we’re doing. If we can influence the stories that people are telling in an organization by designing great experiences in the compliance space, then we’re onto a winner.

Hui Chen: This discussion reminds me of some of the interactions I had with companies when I was the compliance counsel expert at the DOJ. Companies would come in, and they would walk us through, for example, “We have this third-party due diligence program. There’s a portal. Employees just go there.” I remember I would just stop them and say, “Okay, so tell me… I’m an employee. I’m going to hire a pizza vendor to cater our lunch. What do I have to do? What does the screen look like?” And most of the time when I asked that question, I got a reaction from the compliance folks across the table like, “I have never thought about that question. In fact, I have never actually even seen the screen that the employees would have to start their process with.” What about you, Nitish? What have you seen in this space—where would you rate the industry on its human-centered approach?

Nitish Upadhyaya: In its truly human-centered approach, I think it’s reasonably nascent. There are lots of people who are doing amazing things with design—they are trying incredibly hard. What we fail to do is realize that it’s not just about individuals, it’s about systems, and creating the entire system that supports an intervention you are looking to engender, or a change you’re looking to make, is the right way forward. We are getting there in the compliance space in appreciating that this is compliance for human beings, and not for robots—that’s fine. We’re getting away from the linear idea of, “If I train for X, Y will be the outcome.” But I think the reason I say it’s still nascent is that we haven’t appreciated the wider ecosystem in which this works. Just changing some of the policies, pretty pictures, whatever it might be (and I’m being reductionist here)—that isn’t just going to do the job. So, good start, but a lot of work to be done.

Zach Coseglia: You said that’s reductionist—it is a reductionist way to approach it, but I don’t think you’re oversimplifying it. To go back to what I said before about policies, I think that actually is the thinking that’s often happening. It’s people getting in a room, well-intentioned, and trying to design for the human. What’s missing from the equation is that you have people designing for the human. You don’t have people going out and asking the human, “What do you need? What do you expect? What is going to help you?” That’s how the products that we know and love are developed. It wasn’t just a bunch of brilliant, well-meaning people in a room coming up with innovative ideas. Their innovative ideas were in part shaped by market research, by outreach, by talking to people, by giving people an opportunity to experience the product, and having that experience and that input, that user-centered perspective then incorporated into the design. That’s what’s missing.

Nitish Upadhyaya: I couldn’t agree more. I think there is a lot to be done in appreciating and understanding the individuals that work in an organization. Hui, it’s something that you and I talk about a lot as well in the training space. Not everyone needs the same training. Not everyone has the same context and the same responsibilities, or the same risks and the same pressures. Often, businesses find that it’s much more efficient to roll something out to everyone in the same way, having spoken to maybe a few people or laboring under their own assumptions about what works and what doesn’t work for them, rather than the organization, and that doesn’t land for lots of people. And it’s, again, a change in mindset that’s needed.

Hui Chen: I also want to recognize that there are compliance programs out there that, whether they admit it to themselves or not, exist not for the people but for the regulators. When they write a policy, they never intend for people to read it. The policy is written for the regulators to read. The training programs are done for the regulators to see. Now, I can’t really speak for regulators, because I’ve never worked for regulators, but I have worked for law enforcement, which is the DOJ. Don’t forget, whether you see them as such or not, the people who are regulators or law enforcement are also humans. This is the point that people often forget—they think, “I’m going to write a policy that the DOJ, the SEC, or the OIG is happy with.” Those are human beings too. They are also regulated themselves—they have regulations and laws, they experience their own internal compliance, and they can look at your program from their human perspective, as well.

Now, I would love for us to talk about your training work. You’ve been doing quite a bit of research on compliance training—perhaps you can just tell us a little bit about your latest project on this.

Nitish Upadhyaya: Yes, absolutely. It’s a real fascination of mine. Training and good experiences—making sure people have time well spent and interesting outcomes from facilitated sessions—are such a joy for me, a personal joy. It gives me so much pride when people walk out of a session with a smile. And so, this research is really close to my heart. People often conflate training and learning—we see them mentioned in the same breath. While they are very much linked—the designed or intended outcome of training is learning—we often forget that training can fail to produce learning if it is designed badly, if people are not concentrating, if the environmental factors are not right, or the content is not correct. Learning doesn’t just happen in a training environment—you don’t just go to a series of six courses run by your organization, and woo-hoo, you’re trained. You’re trained by your supervisor, you’re trained on your desk, you’re trained by the interactions that you see happening, and the communications that come out, the work that your competitors are doing, what the regulators are doing—all of it is ultimately influencing your decision-making on a day-to-day basis. With that in mind, I was trying to figure out: What does good compliance training look like? And how do you measure whether or not training has been effective—has it caused a change?

So, I’ve really looked at two key areas. There’s the big picture point and the small picture point. The small picture point is: Did a training session—a series of demonstrations, experiences or feedback sessions—change the individual participants’ knowledge (if that’s what you intend), skills, attitudes, behaviors? Remember, you’re not just training for knowledge. It’s not just about how much of the FCPA you can memorize after a 45-minute training session. That’s one aspect that’s incredibly important. The second really important aspect, which people often forget, is a point that I mentioned earlier: this training happens in an ecosystem. It isn’t a magic bullet that’s going to fix everything because you sit everyone down in a room. It happens against the background of what people on the team are doing, what your bosses are saying, how people are dealing with grievances or ethics issues. And that’s probably the most fundamentally important thing: Can we understand what stories people are telling on the ground? Can we then use that to figure out what training they actually need as sales teams, procurement offices, or people who are sitting on the front reception? We use that to understand what the objectives of any one training session could be, implement the training, and have it supported by procedures, by communications, by changes in processes, by what your leadership team are doing, and then, go about measuring that change not by how many multiple choice questions have been answered, but by changes in the stories that people are telling about the organization. How comfortable they feel about whistleblowing, how supported they feel by compliance, how happy they are with the way management deals with ethical issues—that’s the real change.

Hui Chen: That was fascinating, Nitish. Given what you’ve said, can you give us some examples of how training in the compliance space can be reimagined?

Nitish Upadhyaya: Absolutely. This is where we get to put our creative thinking hats on and really have a bit of fun, while hopefully generating some effective results. How can we be clever about compliance budgets? We don’t have unlimited resources—we can’t train every individual for their individual job role. What about using something like social network analysis? We come back to that technology aspect: identify the influencers in a department or on a desk and give them additional training that then seeps out and changes the norms on that desk. That’s one way in which you could use technology and the relationships that people have to engender some targeted training in what goes on. The other aspect is thinking very carefully about how you support people at the point of making a decision. They’re not going to look at huge policy manuals, but can you train people to identify the red flags or where there might be issues, and then reflect those in the systems themselves, whether it’s to do with expense claims or onboarding a new supplier? How much of that can we take off the knowledge-retention burden of an individual, where they have to go dig through or figure out what’s going on, and simply plumb it into the procedures that they follow, giving them easy access? Interventions that work over a series of months and years to reinforce what’s going on, rather than the simple one-hour, tick-box exercise, I think, can really change things. Ultimately, if there’s a lesson to any of this, it’s that a mindset shift can dramatically change how participants feel about and interact with the training. Treat it as a workshop, treat it as a facilitated session. In some cases, it’s a piece of theatre—the organization is on show. What you get people to interact with, and what sticks in their minds, is ultimately the most important aspect. It’s not about knowledge all the time—maybe it’s about attitudes or behaviors.
So, there’s lots of different interacting ways of making something new and changing the experiences we often have in a training environment.
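As a rough illustration of the social network analysis idea above, even a simple degree-centrality count can surface the best-connected people on a desk. The names, the edge list, and the choice of plain degree centrality are all invented for illustration; real analyses would draw on communication or collaboration data and richer centrality measures.

```python
from collections import Counter

# Hypothetical "who works with whom" edges on a single desk.
edges = [
    ("Ana", "Ben"), ("Ana", "Cara"), ("Ana", "Dev"),
    ("Ben", "Cara"), ("Cara", "Dev"), ("Dev", "Elle"),
]

# Unnormalized degree centrality: count each person's direct connections.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The best-connected people are candidates for the deeper, targeted
# training whose norms can then "seep out" across the desk.
influencers = [name for name, _ in degree.most_common(2)]
print(influencers)
```

The design point is budgetary: rather than training everyone identically, a handful of well-placed individuals receive the richer intervention and the network carries it outward.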

Hui Chen: I often hear people talk about gamifying training. What you’re talking about is much more than gamifying training—it is really building training into that ecosystem. To me, that really changes the traditional classroom approach to training—on a preset schedule, you all gather together and have some kind of knowledge downloaded to you. What I’m hearing is real-time interventions, building the training as part of the ecosystem, and—just to go back to the point that we’ve been driving all along—really thinking about how that human is going to experience this, make use of it, and how that’s going to help that human in doing what this person does at his or her job.

Nitish Upadhyaya: I think that’s right—every interaction is a training opportunity. Everything you’re doing is feeding an individual information about how they make decisions in the organization, what is seen as right and wrong. If you can move away from that classroom-based training or the tick-box attestations, and think about a portfolio of interventions including training that wrap around a topic, whether it’s anti-bribery training, whistleblowing, harassment, anything else, then not only do you have more opportunities to impact individuals, not only are you being more consistent in your messaging, but also you have a lot more data points to measure whether or not this is working.

Hui Chen: That is a completely revolutionary way of looking at training. Thank you so much, Nitish—that’s very exciting.

Zach Coseglia: The ABCs of the Lab are “analytics, behavioral science and creativity.” A lot of what you’ve talked about today really hinges on the ability to reimagine something, to think innovatively, to be creative. I think sometimes in the ponds in which we swim—whether it’s ethics and compliance, even our DEI work and ESG work, and certainly the legal profession more broadly—we hear from people who don’t personally identify as creative. Talk to us a little bit about the importance of creativity, but even more so, what do you say to those who don’t think of themselves as creative?

Nitish Upadhyaya: I have a two-year-old daughter, so I’m really in this creative, fun mode at the moment. We’re enjoying ourselves painting, exploring the world for the first time through new eyes. I often tell this story. A kid is speaking to her father. She says, “What do you do for work?” He says, “I’m a creative drawing coach. I’m an art professor. I teach people how to draw.” And she says, “You mean they forget?” That always sticks with me. We are such creative individuals, whether or not we think so in the professional space. In our personal lives, you might take photographs, you might cook, you might design things, you might play with your kids—whatever it is, that is one of our superpowers as human beings, that ability to imagine beyond what you can see in front of your very eyes. So, to those people who think they’re not creative, I just ask them to think about one thing they’ve done in their personal lives which brought them joy, and that is likely to have been something creative, however they imagine it. Reframe what creativity is—it’s just about looking at the world in a slightly different way. You are the only person that has your own unique perspective. Add that across seven, eight, ten different people, like we’re lucky enough to have at the Lab, and suddenly you get this amazing melting pot of ideas that can really change things.

Zach Coseglia: That’s amazing. Alright, Nitish, now it’s time for us to get to know you even better. We’re going to ask you a series of questions—these are the same questions we ask everyone who joins our podcast. The idea, in the spirit of user-centeredness and human-centeredness, is just to peel some additional layers back and get to know Nitish. This is inspired by Proust, as everyone knows, by Bernard Pivot, by James Lipton, by Vanity Fair. I will ask you the first question. There’s actually a choice here, Nitish—you can answer one of the following. The first question is: If you could wake up tomorrow morning having gained any one quality or ability, what would it be? Or you can tell us: Is there a quality about yourself that you are currently working to improve, and if so, what is it?

Nitish Upadhyaya: I’m going to go with the first one. Mine would be the ability to teleport, because I would love to be everywhere. In the spirit of being artistic and creative, what a wonderful way to get lots of perspectives without racking up the carbon credits.

Zach Coseglia: Nitish, in this scenario, are you teleporting in the universe or in the multiverse? I think this is really important.

Nitish Upadhyaya: Absolutely—the multiverse, surely. You want to know what happens in all different circumstances, because that’s the scientist in me.

Hui Chen: The next question is also a duo that you can choose one of two. Who is your favorite mentor? Or: Who do you wish you could be mentored by?

Nitish Upadhyaya: I’m going to pick the latter. Someone whose work I’ve followed from afar and am incredibly interested in is the Microsoft CEO, Satya Nadella, whose transformation of the culture at Microsoft is reasonably legendary, I think. And what an interesting person to be able to speak to—to discuss not only the business side, but actually how you generate productivity and create an exciting, interesting culture that fosters creativity as well as diversity.

Zach Coseglia: A really good answer—I like that. Alright, the last set of pairs for you, Nitish. What is the best place you have ever worked? Or: What is the best job, paid or unpaid, that you have ever had?

Nitish Upadhyaya: I’m going to go with the latter. When I was 16, I sold TVs, and it still remains one of the foundational pieces of who I am. I spent an entire summer talking to lots of people, including some celebrities who wandered into the shop, and I would sell them all sorts of exciting, then cutting-edge televisions. It was a wonderful view into so many different lives—what people were doing with their TVs, why they wanted them, their families. Maybe that was one of the inspirations for user-centered design: really getting to know someone.

Zach Coseglia: The next is our series of more rapid-fire questions. Hui, why don’t you take the first one?

Hui Chen: What is your favorite thing to do?

Nitish Upadhyaya: Cook.

Zach Coseglia: Lots of cooks. Amazing. What’s your favorite place, Nitish?

Nitish Upadhyaya: Nepal.

Hui Chen: What makes you proud?

Nitish Upadhyaya: The smile that you see on someone’s face when you create a cool experience for them.

Zach Coseglia: What email sign-off do you use most frequently?

Nitish Upadhyaya: “Best wishes.” I think that’s quite profound. I wish everyone the best.

Hui Chen: What trend in your field is most overrated?

Nitish Upadhyaya: I think in behavioral science, there’s a real move towards trying to find generalizability—things that work in all circumstances. But really, I think it’s so important to understand context and individual circumstances, so that’s a trend that needs to be looked at a little more critically.

Zach Coseglia: Finally, Nitish, what word would you use to describe your day so far?

Nitish Upadhyaya: “Stimulating.”

Zach Coseglia: I love that. Well, you stimulated us, so thank you for that, Nitish. And thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that’s happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple, Google, and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.