A Ropes & Gray (RopesTalk) podcast series from the R&G Insights Lab that is a curiosity-driven hunt for good ideas and better ways to tackle organizational challenges.
Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, where we ask, “There has to be a better way, right?” There just has to be. I’m Zach Coseglia, the co-founder of R&G Insights Lab, and I am joined, as always, by my friend and colleague, Hui Chen. Hi, Hui.
Hui Chen: Hi, Zach—Happy New Year. And Happy New Year to everyone out there. I can’t believe it’s 2024.
Zach Coseglia: Happy New Year. It’s 2024—that’s pretty amazing. Hui, what are we going to talk about today?
Hui Chen: I think we’re going to take a peek ahead here at the beginning of the year. I don’t like the word “prediction,” but we’ll try to look at what we think might be some of the trends and where things might be moving in the world of ethics and compliance in 2024.
Zach Coseglia: Terrific. Let’s dive right in. What is your first non-prediction for 2024?
Hui Chen: It’s basically that my mind, like so many people’s, goes to where AI is going to take us. Of course, I feel like 2023 was the year when AI got on everybody’s tongue. There was an almost incessant hype around ChatGPT—what it’s doing, what jobs it can do for us, what jobs it might be threatening to replace—all of those. I think in all of that conversation, the compliance profession, like everyone else in 2023, was very much still grappling with, “What could it do?” and some experimentation along the lines of, “Let me try to use ChatGPT for this and see how it comes out.” I think this is a technology that evolves so quickly that we’re likely to see major breakthroughs in a lot of the things it couldn’t do in 2023—it’s going to move toward much greater capabilities.
What does that mean for the people in the corporate organizational space? In terms of corporate strategy, companies are, for sure, going to increasingly use these technologies for all kinds of business purposes. I think compliance and ethics professionals have an opportunity here to shape how this technology is used for business purposes. They can even have a voice as governments around the world grapple with the ethical issues involved in this technology. But all of this requires us to truly embrace the technology and throw ourselves into learning what it is, and I think that really needs to be the first step. Certainly, what we observed in 2023, Zach, was a lot of people using a lot of terms without really understanding them. I honestly started seeing people use the word “AI” to refer to things that spreadsheets have been able to do for decades. Telling a spreadsheet to generate a graph for you is not really AI—that has been available for quite some time as simple software. AI, machine learning, software capabilities—we know that businesspeople are embracing and using them, and in order to understand the risks and the ethics involved in the use of these technologies, you’ve got to understand the technologies themselves.
Zach Coseglia: Where do you see the biggest opportunity? Is it in ethics and compliance professionals becoming more versed in emerging technologies, including AI, generative AI and machine learning, so that they can advise the business on the risks of using them to advance commercial goals, or do you see it as an actual tool to make compliance better?
Hui Chen: It’s both. But, so far, the chatter I’ve heard in the ethics and compliance space is mostly focused on the latter: How do we in the ethics and compliance function use these technologies? I have not heard much, at all, about compliance and ethics professionals being in the role of a valued advisor, having conversations with their business partners about how the business is using these technologies. The ethical use of AI is a conversation that’s going on at the Congressional level and at the international regulatory level. I’m not hearing the voices of ethics and compliance people in those discussions.
Zach Coseglia: Let’s take those, actually, in turn. Let’s start with the latter, which is how these new technologies, such as generative AI, can potentially be used for compliance purposes. I’ll share with you one of the lowest-of-low-hanging fruit that I see, which actually reminds me of conversations that I was having when I was in-house 10 years ago. At the time, we were talking about using “chatbots” to help employees understand the requirements of a policy or the policy environment more generally—to use that technology to eliminate the sometimes burdensome volume of questions that compliance professionals get about what is allowed and what isn’t, what the expectations and the accountabilities are, and what the expected behaviors look like. I think that goes beyond just compliance—any enabling function within an organization (compliance, legal, HR, information technology) is probably burdened with those kinds of questions. And so, 10 years ago when we were having these conversations, it seemed like a good idea, and there was often energy and momentum, but the technology at that point wasn’t ready to enable those sorts of things in a cost-efficient and effective way. Now it is—and so that, to me, is the lowest-of-low-hanging fruit. How can we either reimagine our policies or simply reimagine the way that we communicate policy requirements and expectations to employees using generative AI?
Hui Chen: There is certainly a good amount of chatter about that usage. There is also some nervousness related to it, of course, because what if the one answer it gets wrong is the one that’s really critical? I can imagine, for example, someone who puts in a slightly more complicated fact pattern than usual, or who does not give the chatbot enough information, and the chatbot doesn’t know to ask questions back. A lot of times when people ask compliance questions, we ask 25 questions back because we need to understand more. That’s what chatbots and GenAI are still not very good at: asking questions back, probing the people who are asking, and issue spotting. This is not to say the technology will not continue to improve—it is going to get better and better at that. I do think this is a space where a lot of people are thinking about deploying the technology, but certainly they would want to do so with a great deal of caution, a good amount of monitoring and testing, and making sure that the system continues to learn from the mistakes it makes.
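To make that concrete, here is a minimal sketch of the kind of policy chatbot being discussed, written against the OpenAI Python SDK. The policy excerpt, model name and helper function are illustrative placeholders rather than a recommended design; the point is that the system prompt can instruct the model to ask a clarifying question when facts are missing, and that every exchange can be logged for the monitoring and testing Hui describes.

```python
# Minimal sketch of a policy Q&A assistant that is instructed to ask
# clarifying questions before answering. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment; the
# policy text, model name and function name are illustrative.
from openai import OpenAI

client = OpenAI()

# In practice this excerpt would come from a policy document store
# or a retrieval layer, not a hard-coded string.
POLICY_EXCERPT = """
Gifts to government officials require pre-approval by Compliance.
Gifts to private business partners under $100 USD need no approval.
"""

SYSTEM_PROMPT = (
    "You are a compliance policy assistant. Answer ONLY from the policy "
    "excerpt provided. If the employee's question is missing facts you "
    "need (for example, recipient type, amount or country), ask a "
    "clarifying question instead of answering. If the policy does not "
    "cover the situation, say so and refer the employee to Compliance."
)

def answer_policy_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # favor consistency over creativity
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Policy:\n{POLICY_EXCERPT}\nQuestion: {question}"},
        ],
    )
    answer = response.choices[0].message.content
    print(f"Q: {question}\nA: {answer}")  # log every exchange for human review
    return answer

# A vague question should trigger a clarifying question, not a guess:
# the bot cannot know whether "my contact" is a government official.
answer_policy_question("Can I give my contact a $75 gift?")
```

Run against a deliberately vague question like the one above, a prompt of this shape should come back asking who the recipient is rather than guessing, which is exactly the probing behavior Hui notes these tools have historically lacked.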
Zach Coseglia: What else are you hearing chatter about?
Hui Chen: Honestly, I really do hear more chatter based on misunderstanding than on actual use, and I do think that speaks to why it’s really important for people to take a couple of hours to read up on the materials or listen to Rumman Chowdhury’s podcast episode from last year—that would be helpful. Shannon’s episode addressed a lot of these issues as well.
Zach Coseglia: So, your thinking—in terms of, again, not a prediction, but a potential trend that we might see in 2024—is maybe less focused on the operationalization of advanced technology, and more on just getting smart, so that we’re able to make decisions about which technologies we might want to operationalize and where there may be use cases that actually align with compliance.
Hui Chen: Absolutely. I do think a lot of people start to attempt to operationalize without understanding, and that’s where mistakes are made or investments turn out not to be so great. The first thing you want to do, before you seriously think about deploying something, is understand it.
Zach Coseglia: Yes, I couldn’t agree more. It actually very much connects to the other part of the AI discussion, and the GenAI discussion more specifically, in terms of how compliance and ethics professionals can be advisors to the business on these things. I think that is a huge opportunity, because as the business continues to look for ways to use these tools to advance commercial goals and its core business, there’s a tremendous amount of risk associated with it, just as there is in operationalization in the compliance space. There’s a whole host of different risks, from the quality of the underlying data to the validity of the information a tool pushes out. We’re talking about bias in the data—both bias that may simply give you improper or inaccurate results, and bias that may actually be illegal, problematic or against your values as an organization. I think part of the reason why you’ve heard less about ethics and compliance professionals being those advisors is that you need to deeply understand the technology in order to be an effective counselor or advisor to the business on these things. And so, part of what I see as an opportunity is not just getting smarter on the part of ethics and compliance professionals, but also HR professionals who are involved in how these things are being used from a training, talent management, recruiting or DEI perspective, and others across an organization—it’s more than just those existing people “getting smart.” It’s also about what skillsets we need on the team to be able to advise on these things. Personally, if I were going to advise commercial or business leaders on the risks associated with AI and emerging technologies, I wouldn’t just want the smartest lawyer in the room, I’d want Rumman Chowdhury in the room. I’d want ethics experts who understand the technology, and I hope that we see more of that kind of multidisciplinary team approach as a way for the functions that support the business to support them more effectively.
Hui Chen: Yes, without question. This is something an ethics and compliance professional really should be having input on. If we’re going to have a governance structure for how we use AI as a business organization, who should be in that governance body? You need lawyers, you need businesspeople, and you need people who understand both the technology and the ethics of it. Good, sound business decisions are made when you have these different kinds of expertise complementing and challenging each other in a room. I also think this discussion, interestingly, mirrors what I’ve always thought of as the ethics and compliance distinction: “ethics versus compliance.” This is actually not about “compliance,” because there are no rules right now for you to comply with. The essence of “compliance” is that there’s a rule and you follow it—you make sure your organization does what the rule requires. This is an “ethics” space—we’ve got a new frontier that we’re all entering. What does that mean for us as an organization? How should we be using it? How should humans interact with this technology? What does that mean for our business model, our customers, our suppliers, our employees? All of those are “ethical” questions.
Zach Coseglia: That’s right. It’s not “What can we do?” It’s “What should we do, or not do?”
Hui Chen: Exactly. It’s an unprecedented opportunity, really. It’s not often in history that we have a society-transforming technology emerging and coming to dominate people’s daily lives so quickly that there is very little regulation in place. This is an opportunity where you can not only influence your organization, but also influence what regulations and rules do come into play.
Zach Coseglia: For sure. I want to share one of my thoughts for 2024, which is, I think, an offshoot of the one that you’ve just shared, but with a slightly different spin. I think that what we are likely to see, and what I feel like I’m already seeing, is a bit of a technology reckoning in some of these spaces. I say that as a distinct thought from what you’ve shared, because I think folks are going to realize that there are a lot of places where the solution they need is actually something short of the shiniest, most modern, most sophisticated GenAI-powered tool. In the wave that we’re all riding right now—because of the very real but also pop culture-influenced attention to GenAI—I think folks are going to start seeing other technology opportunities: ways to make their processes more efficient, ways to ensure that their human capital is spent on the most strategic, value-added tasks, while other technologies support more routine and rote activities. In order to successfully implement the most sophisticated machine learning, AI-powered and GenAI-enabled solutions, you’ve got to love, curate, manage and connect your data. And I think folks are realizing, more and more, that in order to win this game, they’ve got to treat their data itself as an asset. There may actually be a lot more effort, energy and success directed at solutions that fall short of the most sophisticated GenAI, but that work gets people on the path toward being able to implement more sophisticated solutions in the future.
Hui Chen: We didn’t plan this conversation, but it’s like one topic dovetails into another perfectly. That leads exactly to the next point I want to make, which is about the transformation of data analytics. I think 2023 was yet another year in which data analytics gained more recognition and acceptance in companies’ ethics and compliance programs—there’s no question that it’s a concept that’s here to stay. But what I have certainly seen so far is—similar to the lack of full understanding of AI and associated technologies—some mystique around the term “data analytics.” And it’s interesting. You interviewed Matt Galvin from the DOJ on data analytics—2023 was his first full year in that position, which obviously brought more prominence to the use of data analytics, and that continues. He said something along the lines of encountering a lot of companies that talk about data analytics as some kind of “spiritual concept to which they haven’t had the awakening”—I thought that was hilarious, and I do think it’s true. What I see is a lot of, again, misunderstanding and misuse of the term, particularly from those who are not really doing it. I feel like there’s some kind of pressure: if you’re not doing data analytics already, you feel like you should be, but you don’t really know how, so you just sort of dress up your own metrics and call them “data analytics.” Recently, somebody presented to me their “data analytics”—literally a slide that said, “Our Compliance Program Data Analytics”—and it was just how many trainings they conducted, how many third parties they vetted, how many people they have and how many hours they worked. I was like, “That’s not data analytics—those are your KPIs. And, in fact, those are the old-fashioned KPIs, not even really updated KPIs for your compliance program.” Just calling it “data analytics” isn’t going to make it so.
I think part of the continued growth of data analytics is also enabled by exactly what you describe: the expansion in technology. If you were trying to do this five to 10 years ago—to build a data analytics platform for your company—it would cost a lot of person-hours and it would cost you millions. Now, there are multiple vendors with very good, out-of-the-box solutions that you can deploy right away with pretty minimal customization. Of course, you can make it more sophisticated, you can customize more, but if you just want to do something basic for a company, it’s not that hard—it’s a lot easier than it was five years ago. So, I think this advance in technology, which was certainly fueled by the AI trend, is going to continue to raise expectations about the use of data analytics. Now, back to the learning point: none of that sophisticated technology, and none of those readily available tools, is going to help you if you think your training completion number is the data you need to analyze. It really requires you to understand, “How do you use data to tell the story of your compliance?”
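To illustrate the distinction Hui is drawing, here is a minimal sketch, with invented file and column names, of the step from a KPI (an activity count) to analytics (relating an activity to an outcome). Instead of reporting how many trainings were completed, it asks whether business units with lower completion rates generate disproportionately many hotline reports.

```python
# KPI vs. analytics: a toy example with hypothetical CSVs.
# Assumes pandas is installed; file and column names are illustrative.
import pandas as pd

training = pd.read_csv("training_completions.csv")  # employee_id, unit, completed (0/1)
hotline = pd.read_csv("hotline_reports.csv")        # report_id, unit

# KPI-style number: a single activity count.
print("Trainings completed:", int(training["completed"].sum()))

# Analytics-style question: do units with lower completion rates
# generate more hotline reports?
by_unit = (
    training.groupby("unit")["completed"].mean()
    .rename("completion_rate").to_frame()
    .join(hotline.groupby("unit").size().rename("reports"))
    .fillna(0)
)
print(by_unit.sort_values("completion_rate"))
print("Correlation:", by_unit["completion_rate"].corr(by_unit["reports"]))
```

A correlation across a handful of units proves nothing by itself, but even this toy framing changes the conversation from “how much activity did we log?” to “what is the data telling us about where risk lives?”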
Zach Coseglia: Hui, I couldn’t agree more. It’s part of the reason why we have a data storyteller on our team, and part of the reason why we’re building a team of data scientists. You mentioned the interview that I did, the fireside chat without a fire, with Matt Galvin at that conference. One of the things that I thought was really special about that conference is that, for the first time, I believe, it was co-chaired by a data scientist and engineer, not a lawyer. This is someone we will have on the podcast in short order, who leads an analytics function within a large company’s compliance department. And you need that to be able to really advance this work. You’ve got to have the underlying expertise and the ability to think differently about how we could use the data to answer the questions that folks have, and to tell the stories that they want and need to be able to tell.
Hui Chen: Yes, exactly. This is what I call moving from “data analytics” to “data narrative.” What you really want is to use your data to tell a story that naturally compels people to action.
Zach Coseglia: What I hear and what I see, and what we help folks with day in and day out, is that they’ve gotten to the place where they are using data to communicate compliance efforts (maybe compliance performance), but people feel like they’re getting a dump of data—numbers, facts and figures. What they’re not getting is a narrative or a story, and that’s a big part of where I hope, and do think, we’ll continue to see maturation over the course of the next year.
Hui Chen: It has to be not just the data, but what the data mean for the audience that you’re presenting to, and what you want them to do with it.
Zach Coseglia: “What you want them to do with it?” Yes: overcoming the “so what” factor is, I think, one of the biggest barriers and one of the biggest opportunities for compliance professionals and others within an organization that are supporting a business. That’s one of the biggest opportunities to really add value. All right, what else do you have?
Hui Chen: We’ve talked a bunch about technology, so it’s probably time we talk about humans. There are a couple of worries that I have about technology generally as machine learning and artificial intelligence take over more and more rote tasks. I keep thinking that rote work is how humans learn, too. It is how many professions have trained their newcomers, from medical interns to new lawyers. Certainly, when I started, I did incredibly boring and mundane things like summarizing trial transcripts, but that was how I learned. I didn’t know it then—I wish somebody had told me, “Yes, somebody needs to summarize the trial transcripts, but the reason you’re in this dark room doing it for eight hours a day for two weeks is not just because we need someone to do it—it’s so you learn.” It’s like watching a trial up close, because you’re reading everything: every question and every objection, how it’s ruled on, how the stories are laid out with each witness, how someone is cross-examined—all of those things I learned by having to read them. Now, you can get a bot to summarize trial transcripts—if not now, then soon, probably by the end of 2024. We’ve got to figure out ways for people to learn certain things, because the way we used to learn may have been taken away from us. That’s a general worry that I have.
I think we need to think about how we’re training the new generation of workers in our profession generally, but I also want to make sure we don’t lose sight of the fact that all this technology is meant to make humans more efficient. Certainly, in our ethics and compliance space, it’s meant to help us identify human-created risks and influence human behavior, and that requires a focus on how human beings are experiencing all of this in the context of their organizations. You and I have both read a ton of company policies and procedures, and one of the first things I experimented on with ChatGPT was having it draft policies and procedures. Many times, I have read policies and procedures drafted by humans that read like they were drafted by machines (or at least sounded like they were meant for machines to read), and then you have a machine draft a policy that’s actually in plain language and very easy for humans to read. I found that really ironic. I do think that whoever built the ChatGPT technology remembered that it’s human beings who are using it, and so they produced results that are easy for human beings to use, whereas human beings sometimes forget that.
Zach Coseglia: Maybe there actually is an opportunity here for the human to become more human as a result of a technology-driven change in circumstances. I think about what you just shared, and your concern about folks losing some of those rote tasks that defined the early years and the introduction into a profession, and wonder, maybe it actually means that we’ll accelerate progress, because we’ll start getting more substantive experience sooner. We’ll be in the room sooner. We’ll be asked and forced to make decisions sooner. We’ll have tools that we can use, like generative AI, to help us expedite things that otherwise would have taken us a lot of time in ways that actually bring value to the human. Part of the reason why we created this Lab is we want those policies to be more human, the training to be more human, the program to be built with an awareness of the human being who’s on the other side of it, and the analysis that’s being done of risk to be more than just policy-based, but people-based, that it’s actually focused on behaviors. I think that the only way we get there is if we actually put a premium on that, and we make it part of the education. And so, that actually is this wonderful, unexpected, unintentional throughline from everything that we’ve talked about today. If we’re going to use generative AI and other sophisticated tools, we need to get smart on them. If we’re going to do data analytics more effectively, we need to get smart on it. We need to have experts in the room who can advise on these things as part of a multidisciplinary team. And to combat some of the human challenges that may come out of the operationalization of these new technologies in ways that make the human role different, we need to train, educate and reset the expectations of our humans in ways that are going to actually create the future that I think we hope for, which is a more data-driven, but also a more human-centered form of compliance.
Hui Chen: What we’re saying, really, requires a lot of humility and realizing that you always have more to learn, whether it’s about machines and technology or about humans. On the human side, a lot of it is just listening. Something we have talked about with various guests on our podcast, and that I know I’ve talked about at conferences, is the importance of listening. Listening can take many forms: it could be you walking into the cafeteria, sitting next to someone and listening to them, but it can also take more structured forms, like surveys or cultural assessments. What I really think is important is understanding humans through their stories. Stories sometimes capture so much that you can’t explain in other words. There were times when I would just say, “You know what? Instead of trying to explain all this to you in a lot of analytical sentences, I’m just going to tell you this story.” And by the time I’d finished the story, they’d be like, “Ahh, that’s very telling.” You always get that reaction, because when you recount a story, there are layers to it—how people are interacting, how people are thinking, whether they are able to grasp certain concepts, what they think about those concepts, what those concepts mean to them—all of that can be captured in a story. I think listening and storytelling have been really underutilized in companies’ compliance program efforts to understand company culture. And so, more consciously inviting storytelling from the people we listen to has been very informative in the work I’ve done since joining the Lab.
Zach Coseglia: I fully agree. And it leads me to my last non-prediction, my hope for the future, and the direction I think we are very much headed, which is normalizing true culture reviews in the way that we now treat risk assessments as second nature. Increasingly, we see folks realizing that culture is at times their greatest organizational risk, or one of their greatest organizational risks—not just from a legal, ethics and compliance perspective, but also from a performance perspective. With that realization, I think we’re seeing more effort being put into meaningfully assessing culture. When I say “meaningfully assessing culture” or “doing a true culture assessment,” I mean something more than a couple of questions in an employee engagement survey, more than a periodic pulse survey that goes out to your employees, and more than a traditional Likert multiple-choice “strongly agree/strongly disagree” question—to your point about bringing out the stories of an organization, and defining culture, in some ways, as the stories that folks tell about their place of work. I think there’s a long way to go to truly normalize that kind of more meaningful culture review—one that includes more storytelling-driven, qualitative data collection from employees alongside the quantitative, more focus groups, more interviews and more intentional discussion about culture. There’s a long way to go, but I think it’s the future of ethics and compliance, the future of an effective and meaningful diversity, equity and inclusion program, and critical to organizational performance. And so, I think we are on that path, but it’s my hope that we continue, and maybe even accelerate, the work being done in that space.
Hui Chen: I couldn’t agree more. I think people are realizing the limits of the more traditional survey methods. More often than not, I see surveys that have leading questions and that contain concepts that can be interpreted very subjectively from one person to another. Ultimately, the employee’s choice is only to “agree” or “disagree”—they can’t add any nuance or say, “Yes, but…” And that is severely limiting. Certainly, as an employee who’s taken them, I just feel like, “There’s more I want to say. Can’t you give me even a little space to enter free text?” That free-text space is a good start, but I think the barrier there is that a lot of people don’t know how to organize that qualitative data. Of course, it takes more time, but you can always start small. You can start by adding one free-text box at the end of your survey to let people say something—that’s a place to start.
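As one illustration of how you might start organizing those free-text responses, the sketch below clusters a handful of invented comments into rough themes using TF-IDF and k-means (assuming scikit-learn is installed; the comments and cluster count are made up). A human then reads each cluster and names the theme; real survey analysis would need far more care, but the point is that organizing qualitative data can begin with a few lines of code.

```python
# Cluster free-text survey comments into rough themes so a human can
# review and label them. Assumes scikit-learn; the data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "My manager discourages us from raising concerns.",
    "Training is too long and doesn't reflect my day-to-day work.",
    "I don't know who to contact when I see a problem.",
    "Leadership talks about ethics but rewards cutting corners.",
    "The training videos feel irrelevant to my role.",
    "People who speak up here get sidelined.",
]

# Turn each comment into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Print the comments grouped by cluster so a reviewer can name each
# theme (for example, "speak-up culture" vs. "training relevance").
for cluster in sorted(set(labels)):
    print(f"\nTheme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print(" -", comment)
```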
Zach Coseglia: Yes, I fully agree. And I look forward to all of the incredible discussions that we’re going to have over the course of the next 52 weeks. With that said, Hui, thank you very much for another wonderful conversation, as always. Thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that’s happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.