Research Ethics Reimagined

In this episode of PRIM&R's podcast, “Research Ethics Reimagined,” we explore the intersection of artificial intelligence, ethics, and studying human behavior in the digital age. Our guest is Mary L. Gray, Senior Principal Researcher at Microsoft Research and Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society.

What is Research Ethics Reimagined?

“Research Ethics Reimagined” is a podcast created by Public Responsibility in Medicine and Research (PRIM&R), hosted by Ivy R. Tillman, PRIM&R's executive director. Here, we talk with scientists, researchers, bioethicists, and some of the leading minds exploring new frontiers of science. This season, we are going to examine research ethics in the 21st century -- and learn why it matters to you.

Episode 2: Mary L. Gray
===

[00:00:00] Ivy Tillman: Welcome to Research Ethics Reimagined, a podcast created by Public Responsibility in Medicine and Research, or PRIM&R. Here we talk with scientists, researchers, bioethicists, and some of the leading minds exploring the new frontiers of science. Join us to examine research ethics in the 21st century and learn why it matters to you.

[00:00:22] Ivy Tillman: I'm your host, Ivy Tillman. Let's dive in.

[00:00:33] Ivy Tillman: Today, I'm pleased to have with me Mary Gray, who is the Senior Principal Researcher at Microsoft Research and Faculty Associate at Harvard University's Berkman Klein Center for Internet and Society. She also maintains a faculty position in the Luddy School of Informatics, Computing, and Engineering, with affiliations in anthropology and gender studies, at Indiana University.

[00:00:55] Ivy Tillman: Mary earned her PhD in communication from the University of California at San Diego in 2004. In 2020, Mary was named a MacArthur Fellow for her contributions to anthropology and the study of technology, digital economies, and society. Mary has authored several books, including Ghost Work, which she co-wrote with computer scientist Siddharth Suri,

[00:01:16] Ivy Tillman: which explores the invisible human workforce that powers much of the web. Mary co-founded and chairs the Microsoft Research Ethics Review Program, which is the only federally registered institutional review board of its kind. Hello, Mary, and thank you for joining us today on Research Ethics Reimagined.

[00:01:35] Ivy Tillman: I'm looking forward to our discussion as we dig into your thoughts on our current and reimagined ethical frameworks associated with data and artificial intelligence.

[00:01:46] Mary L. Gray: Thanks so much for the invitation. I'm really excited to talk with you, Ivy.

[00:01:49] Ivy Tillman: Yes. So let's get started. Can you describe how you began this unique work that sits at the intersection of research ethics and AI?

[00:01:59] Mary L. Gray: I mean, it's a funny thing to realize that I've been thinking about these questions since graduate school, in many ways because my IRB, which was a fantastic group of folks, struggled with how to understand the expectations of researchers who were working, in my case, at the time, with young people who were identifying as, or coming out as, lesbian, gay, bi, trans, or questioning. They weren't exactly sure how to handle the ethics of reading discussion

[00:02:34] Mary L. Gray: posts online, or of having websites that revealed a lot of personal information about young people. So it really started me on a quest to inform other researchers about how they could think outside of the federal regulations when they're looking at a lot of social activity that otherwise is treated as text removed from people's lives.

[00:02:58] Mary L. Gray: And fast forward to 10 years ago: Microsoft Research was just starting to think about how it would bring social scientists to the table as they were developing systems for society, really imagining that most of what we would be thinking about was the impact, what happens after technology is present. But my interest is actually what happens both before and after

[00:03:22] Mary L. Gray: technologies enter people's lives, and often the ways in which, whether you have technology in your life or not, it shapes you. It's a force to be reckoned with.

[00:03:32] Ivy Tillman: That's really interesting, how your research informed where you are now, and you're still leading that effort. Do you see yourself as an advocate in this space?

[00:03:43] Mary L. Gray: Oh, that's such an interesting question, because I feel like in a lot of ways I am an advocate for the value of these basic questions you can bring to any methodology, but particularly for the social sciences: to understand the human condition and our place in the world, we would want to start with an awareness of the value of engaging people.

[00:04:06] Mary L. Gray: So it's always been, you know, the heart and soul of anthropology for me; that's my disciplinary home. And I joke all the time: there's no way to do anthropology in a responsible way that's sustainable if people don't know what you're doing. As soon as they find out what you're up to, you're banned, appropriately.

[00:04:25] Mary L. Gray: So it's a kind of logic of seeing research, particularly basic scientific research, as always being about a deep engagement with people's experiences and their social interactions, not just their individual moments at work or in life. It's to see how much we're always trying, within the social sciences, to understand social worlds.

[00:04:51] Mary L. Gray: So I've become an advocate for thinking about computer science and engineering as very much involved in the social worlds that we now can't quite imagine without, say, a mobile phone or other kinds of technology being a part of them.

[00:05:06] Ivy Tillman: As you were describing your journey, that word just kind of illuminated for me: advocacy, and really kind of leading that effort of explaining. Because, you know, with IRBs and ethics boards, there's this prescribed way of considering research, and that prescribed way really does not fit what's happening right now.

[00:05:27] Ivy Tillman: And so we need advocates in this space, right?

[00:05:30] Mary L. Gray: I mean, I think the hardest part, and for me the exciting thing, about science and disciplines as they form, particularly the ways of thinking and the methodologies that disciplines have, is when those start coming together and creating

[00:05:46] Mary L. Gray: new ways of asking questions, or even new ways of thinking about what questions to ask. That should mean we have to update our priors about ethics. And so it's exciting, but it's so clear that we haven't, particularly for the disciplines that feed information systems. There hasn't been that moment of reckoning with how deeply involved these technologies are, and, quite literally when it comes to something like artificial intelligence, with how dependent innovation is on studying: what are people doing there?

[00:06:23] Mary L. Gray: You know, what do they think in there? Who are they talking with? That is a fundamental reality of systems that rely on a lot of data generated by people in order to advance. I'm pretty passionate about this.

[00:06:37] Ivy Tillman: Well, I mean, I can tell. Your energy is also energizing me around it, thinking about how to inform the IRB about the ethics associated with data and people's behaviors particularly.

[00:06:50] Ivy Tillman: But let's take it to the disciplines, right? How would you suggest IRBs, or even, you know, research ethics professionals, engage with these disciplines, who for so long have not even considered what they're doing as being research? I think about that on the daily.

[00:07:08] Mary L. Gray: I mean, literally, when we started the ethics program at Microsoft Research, there were several of us who had the very practical matter of: we need this to publish.

[00:07:20] Mary L. Gray: So, you know, that's often how one backs into any ethical review process: 'I have to.' And that breaks what is actually really valuable, which is that those basic questions of respect, beneficence, and justice are also methodological techniques for different disciplines. That's always been true.

[00:07:42] Mary L. Gray: So, knowing what we know after 40 years of thinking about what it means to bring this framework to research with people and people's data, it was a chance to think beyond 'I have to have it.' How does this actually improve things? How does it improve the data I'm going to generate? How does it improve my longstanding relationships with the groups and individuals who are going to continue to inform what I learn about the world?

[00:08:15] Mary L. Gray: How will it help the public trust me more, and trust my students? So it started out as the pragmatic need of 'I have to have this piece of paper,' which is very frustrating for most IRB professionals; that's a terrible framing. So, being able to approach this as computer scientists learning how it will improve their outcomes as researchers, having everyone approach it that way, and also approaching it as: we don't know

[00:08:49] Mary L. Gray: what the right methods are that are also aligned with those standing ethical expectations. And we actually could argue that those ethical expectations are due for an update, to really reflect the things we missed when we first put them on the table for biomedical research and for the behavioral sciences.

[00:09:11] Ivy Tillman: I love how you framed that, you know, considering particularly how IRB professionals and chairs and boards engage these disciplines, coming from a different perspective of: how will it improve your research, right? How will it improve your outcomes, and the public trust? To me, that's a lot easier, and it builds that collegial relationship,

[00:09:37] Ivy Tillman: versus the adversarial relationships that we oftentimes experience, right?

[00:09:42] Mary L. Gray: Yeah.

[00:09:43] Ivy Tillman: Yeah. Yeah.

[00:09:44] Mary L. Gray: I love that. Especially for students. I mean, if there's a place I'm most passionate, it's this: who comes to the IRB is often a graduate student who's not able to get the mentoring they need, or who's doing something pretty novel with their methodology, and nobody can give them guidance.

[00:10:00] Mary L. Gray: So I've seen that. Every summer since I've been here, we have a Ph.D. internship program. Every year, we have the best of computer science and engineering, and I can see how they yearn for that guidance. And it's figuring out how to have that guidance come not only from the program staff of our own program, but from other researchers too: to bring in peer review that is robust and open and curious and humble about where we're at right now, because we've got work to do.

[00:10:37] Mary L. Gray: We don't know how to study society at scale. I mean, that's a new thing. It's fascinating.

[00:10:42] Ivy Tillman: I'm learning new things and new ways of thinking and framing. Particularly, you know, my background is IRB, and so a lot of what you're saying just resonates with me, particularly as we're engaging different disciplines.

[00:10:53] Ivy Tillman: So I have another question. The world of AI is converging with the world of research and consumers in ways that may surprise our listeners. We think we know, but I think you probably know a little bit more than we do. Can you share a couple of examples of how AI has been used in the past that many of us would not even know about?

[00:11:14] Mary L. Gray: One of my favorites is... well, actually, let me take one step back. I think the hardest thing about talking about artificial intelligence is that it means many things. It's an umbrella term in the discipline. And so, depending on who you're talking with, particularly which computer scientists, they may have a very particular definition of what AI is.

[00:11:39] Mary L. Gray: And so I think things get quite cloudy quite quickly, because we are all in need of some basic language and some basic understanding of what these terms are and what they mean. So let me break it down. I like to call AI software. I mean, it's basically software. It's very sophisticated,

[00:12:04] Mary L. Gray: so I don't want to diminish that; most of what we call artificial intelligence today is quite sophisticated. But the current versions include things we would all be surprised to learn about, like soap dispensers that automatically release an amount of soap. That's one of my favorite examples, and it's coming from a world of what's called computer vision.

[00:12:27] Mary L. Gray: That's looking at image recognition. It's trying to sense that there is motion. It's a motion detector, but it's a sensor; basically, just think of a tiny bit of software and hardware that's collecting information about what can be recognized as an image, and how to train the software you want to build to respond to that image.

[00:12:54] Mary L. Gray: So in the case of a soap dispenser, building it was basically creating software that could say: I sense movement, and, most importantly, I see an image that maps onto a hand or skin. Unfortunately, most of those models were built on white skin. So some listeners are not surprised to learn that soap dispensers do not work for their skin.

[00:13:23] Ivy Tillman: Yep. I'm one.

Mary L. Gray: Um, I am strangely another, because I am so, so white. The audience can't see me; I am quite, quite pale. In all cases, when it's trying to use an image, it's using millions and millions and millions of images, and then it's finding the most typical image. What's the average of that image?

[00:13:48] Mary L. Gray: So whether you're talking about skin color and how it was labeled, because, importantly, it doesn't see skin color. It sees a label that says 'this is the most typical skin color' in the batch of data that was used to train this bit of software on releasing soap. And so, you know, it's not the earliest example, but it's a very mundane one that I think helps us all see the way in which a fairly innocuous, you could say well-intentioned, aim

[00:14:24] Mary L. Gray: of 'I want to create software that provides this service,' one that depends on learning particular kinds of data, can quickly go off the rails if you were never thinking you would need to have an approach (and we still don't have an approach) to seeing skin color that is quite different from what we do today. It's certainly different from what humans do and how we recognize difference.

[00:14:54] Mary L. Gray: So it's a really interesting case of something that I find quite profound, because it meant that there were folks building that software who didn't think about skin color as something that is as biological as pigmentation but also incredibly cultural. Because in most cases, they weren't looking at different pigmentation.

[00:15:18] Mary L. Gray: They were looking at pictures that were labeled white, peach, black; that's the kind of labeling on most of the material. I don't know if that's too out there, but...
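[Editor's note: a minimal sketch of the idea Mary describes, for readers who want to see it in code. This is not the firmware of any real dispenser; the reflectance numbers, labels, and threshold logic below are invented for illustration. It shows how "the most typical skin color in the batch" becomes the trigger condition when the training batch is skewed.]

```python
# Hypothetical sketch of a "dispense or not" decision learned from labeled examples.
# All values and labels are invented; a real dispenser uses a camera/IR sensor
# and a trained computer-vision model, not four numbers.

def train_trigger(training_readings):
    """Learn the 'typical hand' by averaging the examples labeled 'hand'."""
    hands = [r for r, label in training_readings if label == "hand"]
    mean = sum(hands) / len(hands)
    tolerance = max(abs(r - mean) for r in hands)  # crude acceptance band
    return mean, tolerance

def should_dispense(reading, model):
    """Fire only if the reading falls inside the band the training data defined."""
    mean, tolerance = model
    return abs(reading - mean) <= tolerance

# A skewed training batch: every image labeled "hand" happened to be light-skinned.
batch = [(0.82, "hand"), (0.78, "hand"), (0.85, "hand"), (0.80, "hand"),
         (0.15, "background"), (0.10, "background")]
model = train_trigger(batch)

for reading, who in [(0.81, "pale hand"), (0.35, "dark-skinned hand")]:
    print(who, "->", "soap" if should_dispense(reading, model) else "nothing")
# pale hand -> soap
# dark-skinned hand -> nothing (it was never inside the 'typical' band)
```

Nothing in the model sees skin; it only sees numbers and the labels it was handed, which is exactly the point about labeling made above.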

[00:15:30] Ivy Tillman: No, and it just aligned with our experience of the software and the product, right? And so what you're speaking of, I actually had to learn from a book I was reading, right?

[00:15:41] Ivy Tillman: And so it explained to me my experience with the product and the software behind it, right? And so now, particularly when I'm out in public and I have to use those, I don't get as frustrated. I understand why, and I go, okay. Every time, it's a reminder of the work yet to be done.

[00:16:04] Mary L. Gray: So, on that one, Ivy: I think the hardest part, or the understandable thing, is that particularly for computer science and engineering, it's mathematical.

[00:16:13] Mary L. Gray: The thinking is very much: we need a bigger set; we need to complete the set. So artificial intelligence really relies on a lot of examples of a decision. What is artificial intelligence doing? It is quite literally software that is modeling a decision that it can see. So whether that decision is 'this is when you release soap,' or the decision is 'this is how you complete a sentence,'

[00:16:42] Mary L. Gray: those are all cases where the current techniques within computer science and engineering are operating from 'we have lots of examples, so let's use all of those examples to train a model to automatically respond.' In many ways, it produces what it would look like if you had those examples in front of you.

[00:17:04] Mary L. Gray: Now, I'm simplifying greatly, and I'm insulting every computer scientist who might be listening to this right now. But really, at the end of the day, it relies on there being a really clear decision, so that I can model: is it this, or is it that? So if you hold onto that, the hardest part about all of this is that artificial intelligence has just figured out how to automate things that are quite obvious to a human, but at the same time, in really subtle ways,

[00:17:39] Mary L. Gray: it makes it much harder for us to see: is that usually the sentence I would say, if I just start saying it with autocomplete? You think about all those places. The real challenge here is sorting through, especially from our own point of view, who is lost in the assumptions in the model

[00:17:59] Mary L. Gray: of how things usually go. And so the parallels hold: what's lost in the case of the soap dispenser is every skin tone. And if the move is 'well, I need to get every skin tone,' it misses that that is a Sisyphean task. As long as people continue to have babies, we're going to have a lot of mix.
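[Editor's note: the autocomplete point also fits in a few lines. The toy model below is invented for illustration, not any production system: it memorizes the most frequent continuation in its training text, so the "typical" phrasing always wins, and a rarer but perfectly valid phrasing is never suggested.]

```python
from collections import Counter, defaultdict

# Toy autocomplete trained on invented sentences. "How are you going" is a
# common Australian English greeting; it appears in the data but loses the vote.
corpus = ("how are you doing today . "
          "how are you doing today . "
          "how are you going today . ").split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def complete(word):
    """Suggest the single most typical continuation seen in training."""
    return next_word[word].most_common(1)[0][0]

print(complete("you"))   # 'doing' -- the majority phrasing wins
print(next_word["you"])  # Counter({'doing': 2, 'going': 1}): 'going' exists
                         # in the data but will never be suggested
```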

[00:18:24] Mary L. Gray: You know, probably most important to me: when it comes to language, or when it comes to the places where we're seeing artificial intelligence integrated into everyday decision-making, like 'should I look at a resume or not?', those are places where we haven't fully reflected on what our starting assumptions are about what's typical, and about who is not able to use any of these systems.

[00:18:49] Mary L. Gray: My favorite example: code switching. If you speak more than two languages, or if you speak two languages and part of how you speak is switching between those two languages, it is nearly impossible to model that. We don't have enough examples. But even if we had enough examples, talk to a teenager and see how the way they code switch changes with the next generation.

[00:19:16] Mary L. Gray: Oh, wow, we're just always changing how we interact with each other. That's a good thing. So we should assume that AI actually cannot effectively model anything that has an infinite number of variables and possibilities, or any place where we deliberate. Like, if you think about democracy: the whole point is that we're debating what the best way to do something is.

[00:19:41] Mary L. Gray: It's because there is not one right way to do something,

[00:19:44] Ivy Tillman: Right?

[00:19:46] Mary L. Gray: And AI can only model where it has examples of a decision that's really clear and crisp. It's this or it's that.

[00:19:54] Ivy Tillman: Oh, wow. It's yes or it's no. That's challenging, because that is not our reality.

[00:19:59] Mary L. Gray: No, that's not our reality. It's every computer scientist's dream.
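[Editor's note: a sketch of the "this or that" point, again with invented data. The model below just tallies how a decision went in past examples and repeats the majority answer. For a crisp decision, that works; for a genuinely contested, deliberative question, it still emits an answer, with the same machinery, even though the examples disagree.]

```python
from collections import Counter, defaultdict

def fit(examples):
    """Tally how past decisions went for each input (the 'this or that' counts)."""
    counts = defaultdict(Counter)
    for x, decision in examples:
        counts[x][decision] += 1
    return counts

def predict(counts, x):
    """Return the most common past decision and how unanimous it was."""
    tally = counts[x]
    decision, votes = tally.most_common(1)[0]
    return decision, votes / sum(tally.values())

# Crisp decision: every example agrees, so there is a clear rule to copy.
crisp = [("hand seen", "dispense")] * 50 + [("nothing seen", "wait")] * 50
print(predict(fit(crisp), "hand seen"))        # ('dispense', 1.0)

# Deliberative question: the past examples genuinely disagree.
contested = [("best policy", "option A")] * 26 + [("best policy", "option B")] * 24
print(predict(fit(contested), "best policy"))  # ('option A', 0.52): a near coin
# flip, reported by the same machinery that handled the crisp case above
```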

[00:20:03] Mary L. Gray: I know I'm really insulting my colleagues. But most people in computer science and engineering know that what they're doing, for the sake of argument or in crafting a model, is reducing the real complexity of social life. But biomedicine does it too. Look at all the sciences where we have been okay with a certain amount of injustice, right, to be efficient, to be able to keep moving.

[00:20:31] Mary L. Gray: We're going to keep building our model; it'll get more complicated. But we're introducing an approach to science, through computer science and engineering, that quite frankly resists the interest in getting more complicated, other than 'let's expand the set.' So that's what I worry about: if the move toward making things complicated is 'I'll just get a map of the world,

[00:20:57] Mary L. Gray: I'll literally just capture every experience,' then we will continue to fail to see: hey, you get one shot at that. So if you leave out folks, it doesn't just mean that next time you go around, you include them. You made them trust it less.

[00:21:15] Ivy Tillman: Yes.

[00:21:16] Mary L. Gray: So there's that whole concern. There's also just the practical reality that there are so many things beyond the black and white, beyond the this-or-that, the yes-or-no, that

[00:21:31] Mary L. Gray: if we look at them, we'd say that's not where artificial intelligence should be. And we're not making those distinctions today, mostly because that's a very social way of seeing the world.

[00:21:41] Ivy Tillman: Right. But you know, it complicates the role, in my opinion, that ethics boards and IRBs play in reviewing research involving this software, or artificial intelligence, and it makes it quite intimidating.

[00:21:58] Ivy Tillman: So, you know, what advice would you have for individuals trying to do the right thing, but not really knowing what's right, sometimes, in this case?

[00:22:07] Mary L. Gray: I mean, I think this is why I am so interested in us taking this on. This is our chance to, together, take a step back and think: what are some ways through this problem that really update how we would want to approach biomedicine, which also uses data science, and all of the behavioral sciences, which all use computational approaches?

[00:22:30] Mary L. Gray: So the reality is, it's not just about computer science and engineering, but about all of the sciences that have become data-driven in ways that, computationally, we've never been able to work before. We all need to rethink how we want to do this. Not 'I can't do this,' but 'how do we do this?'

[00:22:51] Mary L. Gray: So I think, yes, it's intimidating. I certainly get intimidated talking with some of our researchers, who are incredibly good at techniques that I don't understand as a researcher. But the thing that I do know is that they are interested in modeling. I am too. And I don't know a scientist who isn't interested in theorizing how things work.

[00:23:21] Mary L. Gray: That is basic science. That is the method; that is the scientific method. So the way to overcome the intimidation is to say: okay, if we're interested in basic science, we're interested in the scientific method for generalizable and transferable knowledge, right? Not generalizing as in 'this is how all things are all the time,' but that mix of what's generalizable and what's really quite specific and contextual.

[00:23:51] Mary L. Gray: That's all the sciences. So, holding those together, what are ways that we can approach ethics as a methodological challenge in the rough? Most of our ethical dilemmas come from, I would argue, researchers trying out new ways to learn about how the world works and running over people in the process. And that's not to forgive anybody for it.

[00:24:19] Mary L. Gray: It's to say: your first order of business, dear researcher, is to be thinking, first and foremost: how do I maintain trust? How do I keep engagement? How do I see that diversity and inclusion are not the completion of a set, right? That they're always going to be evolving.

[00:24:41] Ivy Tillman: I'm excited about that, you know?

[00:24:45] Ivy Tillman: That was very powerful, what you just said. Very powerful. And in these spaces, and in these conversations that we've had over the years, we've not necessarily framed it like that, particularly when it relates to the relationship between those who provide the oversight and those who conduct the research.

[00:25:03] Mary L. Gray: I think that's the part that... I'll go back to that example we were playing with earlier. I don't know anybody who's done programming who doesn't have a story to tell about a graduate student who came into their office and just felt, you know, unmoored by 'how do I do this research design?'

[00:25:21] Mary L. Gray: So we call it oversight, but day to day, there are a lot of folks already who are playing that role of mentor. And I think what bothers me is that that shouldn't be something done entirely by a program that's outside of a discipline. What I'd love to see us do is get back to rings of peer review that prep them

[00:25:44] Mary L. Gray: Yeah. [00:25:45] for a conversation with the expertise of somebody in the IRB who's bringing a different expertise. I mean, I think at this point, the hardest thing is like that domain expertise is sharp, finely tuned [00:26:00] to regulations and that's understandable. This is a regulation thing. Free zone, like we are in like, I would, I would love us to have some rules.

[00:26:11] Mary L. Gray: Don't get me wrong, but it's going to remain [00:26:15] for the near future, which is too much time.

[00:26:19] Ivy Tillman: Yeah,

[00:26:20] Mary L. Gray: It's going to remain this place where, you know, most people are using these techniques of computation, these approaches to modeling human experiences. And probably most important to me: in almost all cases right now, we are completely dependent on people's data, which means I'm scraping the internet,

[00:26:38] Mary L. Gray: I'm buying a data set. So there is no way forward with AI without engaging people's material. And you're right: my toes curl when I think about the things I know that are knowable, not just about individuals, but about our relationships with each other. To me, it's no longer about privacy.

[00:27:03] Mary L. Gray: It is about a fundamental right to respect for my social life, to not have that treated as fodder. You know, I think we can all relate to that. I feel like IRB professionals are those folks who are always the Geiger counter for 'something feels off.' And I feel like researchers need that reflected back to them.

[00:27:29] Mary L. Gray: I think the reality is that computer science and engineering are at the very beginning of learning why they should listen, how they would listen, and what they would do differently if they're trying to gauge the social impact of their work.

[00:27:47] Ivy Tillman: Fascinating. You know, when you mentioned the layered peer review model: it embeds those ethical, and some of the regulatory, conversations right there.

[00:27:56] Ivy Tillman: Yep. So, in just the limited amount of research that I've conducted, it was there at the design level, and because I'd done IRB work, I understood it, right? But that's not where those conversations are happening, particularly for students.

[00:28:12] Mary L. Gray: For sure. And we could change that in a heartbeat. I mean, I think the good news is, this is all within reach, certainly within PRIM&R.

[00:28:18] Mary L. Gray: I mean, one of the things that really drew me to PRIM&R was: here are the folks who are, I won't say on the front line, you know, that feels too loaded, but who are often in the position of seeing that something could be done differently, something that's going to enhance the public's connection to and valuing of scholarship.

[00:28:38] Mary L. Gray: They can often see it a mile away, and, you know, it's being able to get onto those campuses with a new set of researchers who are actually quite interested in not breaking things anymore. So it's kind of exciting to me that we have this whole new cohort of a discipline that never thought it was in need of this conversation.

[00:29:02] Mary L. Gray: And I've got a decade of proof of that, in these students who come in and want to know, like: how would I be respectful? What would that look like? They're good-hearted people. That is so encouraging. They're the best. I mean, that's why I'm like, oh, they're fine. The kids are all right.

[00:29:17] Mary L. Gray: And that's true across most of the disciplines. I mean, I think most of the social sciences and the biomedical sciences have been learning that there are diminishing returns if you do something just to get a paper out. We don't have time for that.

[00:29:33] Ivy Tillman: I love how it's evolved. Just in the 23 years that I've been involved, I've seen the evolution, and it's exciting to see, particularly with our students.

[00:29:43] Ivy Tillman: They're beginning to think about ethics, and doing the right thing, and wanting to. And I think that's the intersection that we sit in right now. The opportunity.

[00:29:54] Mary L. Gray: And that's just one cohort, you know. Within five years, ten years, we could move them all to a very different place.

[00:30:03] Ivy Tillman: That's exciting, and encouraging. I wanted to ask you another question. AI often gets an overall positive or a negative framing, right? Is there a middle ground, or a way forward for best use cases?

[00:30:17] Mary L. Gray: Oh, I'm glad you asked that, because I think in many ways this is our opportunity to define that middle way. It's seeing where AI is a really great tool for reproducing what we rely on as a typical outcome,

[00:30:35] Mary L. Gray: and being able to separate that out from the places where we know we don't want to override our ability to deliberate. So, put plainly: any place where we want to be able to have a genuine discussion and an openness to 'what direction should we go?' is a place where we want to be mindful and intentional about keeping AI at bay, because it's not going to help us.

[00:31:01] Mary L. Gray: So, for example, if I'm trying to decide, do I send my coworker an angry email right now? Don't automate that. Don't look at my past emails and try to figure it out for me. Right? So, you know, the hardest thing is, I think we can see why we would want that: not just because we'll get a better outcome, but because we want to maintain our humanity.

[00:31:33] Mary L. Gray: We want to make explicit decisions about this. We could use this AI here; we could use it for hiring, we could use it for firing, we could use it for evaluating students' ability to learn in class. We can use it for all those things, but there are places where doing so overrides us and conditions us to stop paying attention.

[00:31:58] Mary L. Gray: And that is not to say there aren't great applications of AI. There are. So, back to the example of the soap dispenser: that's fixable, and it's actually gotten better. So if you're having a bad experience with a soap dispenser, it's just that it's old; it hasn't been updated. So it can get better. The middle way is seeing the ways in which it cannot improve, because there are places where we both don't

[00:32:21] Mary L. Gray: want to override creative, novel ways of being in the world, and because it'll get it wrong. Those two things are different, but both are true. There are things it can't do, and there are things we shouldn't want it to do in the first place.

[00:32:38] Ivy Tillman: Exactly. Oh, wow. A perfect conclusion. Thank you, Mary.

[00:32:43] Ivy Tillman: Thank you for spending time with me talking about this and expanding my ways of knowing and thinking, but also, you know, lending to the beginning of many conversations that we want to have at PRIM&R regarding artificial intelligence, this ethical framework, and engaging disciplines. So thank you.

[00:33:01] Mary L. Gray: My pleasure. I feel like PRIM&R has been a beacon for a long time, and it's a really amazing opportunity to bring what it knows, and the collective intelligence, as you said, of this community, to the table, where it hasn't really been engaged as much as it could be. So I'm happy to keep talking about this stuff.

[00:33:20] Ivy Tillman: Well, I will definitely be reaching out to you. And thank you.

Mary L. Gray: My pleasure. Thank you.

[00:33:29] Hanna Holman: Thank you for listening to Research Ethics Reimagined, a podcast created by PRIM&R and produced by Syntax in Motion. Please subscribe and share with your friends and colleagues. To learn how to become a member of PRIM&R, please visit us at www.primr.org. Be sure to join us next month as we continue our conversation with scientists, researchers, bioethicists, and some of the leading minds exploring new frontiers of science.