Brains, Black Holes, and Beyond

In this episode of Brains, Black Holes, and Beyond, Senna Aldoubosh and Lina Kim sit down with Dr. Andrés Monroy-Hernández, a professor in the Computer Science (CS) Department, to discuss his journey into CS, creative uses for Artificial Intelligence (AI), and addressing AI biases.

Show Notes

In this episode of Brains, Black Holes, and Beyond, Senna Aldoubosh and Lina Kim sit down with Dr. Andrés Monroy-Hernández, a professor in the Computer Science department to learn more about Social Computing. Dr. Monroy-Hernández discusses his journey into CS, creative uses for AI, and addressing AI biases.

This episode of Brains, Black Holes, and Beyond (B Cubed) was produced under the 146th Managing Board of the 'Prince' in partnership with the Insights newsletter.

For more information about Dr. Monroy-Hernández's research, feel free to visit his page linked below.

RESOURCES

https://www.cs.princeton.edu/people/profile/andresmh

CREDITS
Written and hosted by Senna Aldoubosh and Lina Kim
Edited and sound engineered by Senna Aldoubosh
Transcript by Lina Kim
Produced by Senna Aldoubosh 

For more from the Daily Princetonian, visit dailyprincetonian.com. For more from Princeton Insights, visit insights.princeton.edu. Please direct all corrections to corrections@dailyprincetonian.com

What is Brains, Black Holes, and Beyond?

Brains, Black Holes, and Beyond (B Cubed) is a collaborative project between The Daily Princetonian and Princeton Insights. The show releases 3 episodes monthly: one longer episode as part of the Insights partnership, and two shorter episodes independently created by the 'Prince.' This show is produced by Senna Aldoubosh '25 under the 147th Board of the 'Prince.' Insights producers are Crystal Lee, Addie Minerva, and Thiago Tarraf Varella. This show is a reimagined version of the show formerly produced as Princeton Insights: The Highlights under the 145th Board of the 'Prince.'

Please direct pitches and questions to podcast@dailyprincetonian.com, and any corrections to corrections@dailyprincetonian.com.

Senna 0:14
Hi everyone, welcome to Brains, Black Holes, and Beyond, a collaborative podcast between the Princeton Insights newsletter and The Daily Princetonian. From the 'Prince,' my name is Senna, and my name is Lina. Today's guest on the show is Dr. Andrés Monroy-Hernández, an assistant professor in the Computer Science department. Dr. Monroy-Hernández received his Bachelor's in Computer Engineering from the Monterrey Institute of Technology and his Master's and Ph.D. in Media Arts and Sciences from MIT. He now serves as the director of the Human-Computer Interaction Lab at Princeton, teaches social computing, and advises technology executives. Welcome to the show.

Senna 0:51
So our first question for you is what initially got you interested in computer science, more specifically, human technology interactions?

Dr. Monroy-Hernandez
Yeah, I guess like everybody, I learned how to program when I was a kid. At the beginning, I was more interested in science generally speaking; I wanted to be a physicist. In fact, I enrolled as a physics major in college. In Mexico, you have to declare your major when you apply to college, so I was fully in physics. At the time, I felt that programming and computer science were something that I could learn on my own, because that's what I was doing in high school and middle school. But then, once I started taking CS classes, I became really excited about it. And I was like, okay, I might just take more of these kinds of classes.

Dr. Monroy-Hernandez 1:35
I guess, earlier, my first experience with computers was a sort of competition in middle school, in the mid-90s. It was not very common for people to have computers at home; at least in Mexico it was very rare. But my friend and I enrolled in this competition, and we got a computer as an award for winning it. And so I think that got us really into it. At the time, it was one of those machines where you had to choose a color for the screen, either green and black, or white and black, or yellow. All the monitors were monochromatic, there was only one color, but even then it was super exciting to play with that.

Lina 2:18
And so, I mean, now, you go from these middle school competitions to creating, you know, many significant social computing platforms: MIT's Scratch, Microsoft's Meeting Scheduler, and Significant Otter, to name a few. So we were just wondering what that process was like, and how did you navigate the ethics of it?

Dr. Monroy-Hernandez
Yeah, I guess in general, I've always been interested in the part of computer science that is less theoretical and more application-centric: thinking about how you build tools that people use and that bring people together in new forms. Broadly speaking, my area of expertise is social computing, which is about the design and study of computational tools that bring people together, from two people, as in Significant Otter, an app for couples, all the way to millions of people, as in Scratch and other tools like that. So I guess I'm interested in that intersection between the design of the tools and the design of the interactions between people. If you're organizing a party, it's as much about the room as about the activities you're going to have. I really like that in-between space, between the technology and the social.

Dr. Monroy-Hernandez 3:28
And you asked about ethical implications?

Lina
Yes.

Dr. Monroy-Hernandez 3:32
I guess, from my perspective, because of these interests that I have in not just the tools, but the tools and the people, there is always going to be tension. You know, people want different things. Some of my earlier experience with the ethical aspects of computing was in grad school, actually, working on the Scratch community. Even though it was a community for kids to, you know, build and program and express themselves creatively in positive ways, there would always be tension. One of the early tensions we had was between some kids who were really into making games with Scratch and other kids who were into making stories with Scratch. Oftentimes, those distinctions fell across gender, and so the boys would sometimes be more into making games. That's not always the case, but that was how some members of the community perceived it. And one of the challenges that we had is that we wanted to highlight, on the website and in the community, the variety of things that people would make, not just one kind of project, but very different kinds of projects. So we really liked to highlight all sorts of things, from stories to games to animations and interactive art, and so on. And so some of the tensions that we had were people contesting our choices about what we would feature on the homepage, what would come at the top, how it should be designed, and so on. There were kids who were just misbehaving with each other and being mean, as well as things like copyright infringement: a lot of kids were very protective of their projects, so we really had to think through how to design systems to support certain kinds of collaborative creativity and things like that. So, at least my perspective on how I've been involved in these ethical discussions is: how do you resolve conflicts between people, and how do these tools sometimes heighten and emphasize those conflicts, and sometimes actually help reduce them?

Senna 5:43
Talking more about, like, the conflicts with these interactions: there is a lot of room for artificial intelligence to err. Sometimes, like, criminal assessment systems can have issues, some software can be sexist or racist, and some sort of bias can be seen in these systems. How are scientists going about mitigating these errors?

Dr. Monroy-Hernandez
Yeah, I mean, I think there's a few different things. Some people who are engineers often think, "Oh, we're making tools; somebody else will figure out how people use those tools." Luckily, that kind of view of the world is diminishing, as people see that it's not just about tools, it's really about how people adopt these tools. But in terms of how people are trying to address them,

I guess there's a few different perspectives. One is to kind of reject new forms of technology until it's ready. Some communities in the US, like the Mennonites, don't just accept technology as it is; they're like, "Okay, we're going to wait and see how it develops before we bring it into our community." I guess that's one perspective. From the technical side, the way that could be implemented is by blocking certain kinds of things. Ad blockers are an example. They're not necessarily about AI, but they're a technology that blocks something else that you don't like. Or there are more legal takes on that: in Europe, there's a lot of legislation around privacy. Have you heard about the GDPR? The idea there is: how do you enforce better privacy and management of data through legal means, and charge companies a lot of money when they violate those things? And I will say, I really like this professor at Harvard who has this model about how to make change in society, and technology's role in it. He talks about how code is one aspect of how you can have influence, like the design of these systems. Another is law, as I was saying, like GDPR and other legislation. The other one, he argues, is norms, or the culture, and how things are adopted and used and so on. And lastly, markets: you know, sometimes tools that are not beneficial may actually not do well in the market. But you cannot rely on only one of them. Even if you see technologies today that are successful in the marketplace that may not be good for society, there is a culture around how to think about that in new ways. So I would say these four different pieces, code, law, culture, and markets, are probably some of the more effective ways to address these biases, and more specifically, AI bias.
And it's not an area that I actively work on, but my close collaborator, Olga Russakovsky, in our department, has been doing a lot of work on acknowledging the biases in the data that is being collected to train AI, and then trying to figure out ways of addressing that bias: how do you collect more representative data from different geographies, and things like that? I think that's one angle that I see as very promising.

Lina 8:49
Yeah, and speaking about, you know, broader topics like culture, law, and computer programming: where do you see the future of society headed, as we see more social interactions being held in technological or virtual spaces?

Dr. Monroy-Hernandez
Yeah, I mean, I guess right now we are in this moment where you can have a utopian or a dystopian view, and it probably is going to be somewhere in between. But I guess the optimist in me feels like we will probably have more and more tools that give voice to a broader range of people, and that also kind of democratize access not just to the technology, but to the making of that technology. Some of the work that we were doing with Scratch, and actually even some of the work that we're doing now here, is really about thinking: how do we enable more people to think of technology as something that they can create with and construct in ways that benefit them, as opposed to something that you passively accept? So that's one angle of it. And then, in the dystopian view of things, technology will continue to highlight some of the cultural and societal inequalities and negative aspects of society, from racism to discrimination to, you know, colonialism. All of those negative things can be emphasized even more and made worse through technology. And so, my hope is that we can help prevent those things by architecting these technologies in ways that make it harder. Obviously, it will probably be really difficult to make it impossible, but at least harder. And also, we can help communities or groups of people kind of boost positive and pro-social behavior online, or just in general with technology. So I guess it's going to be the question of the next few years, really: how are we going to manage this? And we see this every day; like Twitter today, it's imploding. And there are these different tensions around how we might better the social media environment, for example. And so I think there are good opportunities to change what we have.

Senna 11:00
This is kind of not really as related to the idea of, you know, the future of society and technological interactions and things like that. But what is something that's misunderstood in your field, or a misconception that the general public has?

Dr. Monroy-Hernandez
My field is human-computer interaction. And I think when people hear that, they often think of the early stages of our field, which were focused on one user, one device. So a lot of the work in HCI was focused on: how do we make a better mouse? Or how do we measure more effective ways of, you know, designing a physical or digital interface, like the size of a button and stuff like that? And I think that's become more of an area of very active development among practitioners. If you work at, say, Google, you may, as an engineer, be designing different versions of a button and then doing large-scale data analysis to see which one works better, and things like that. But I think the field now has moved away from that. It has focused a lot more on kind of what we were just discussing, the societal implications of technology, as well as on thinking about collectives coming together and using the technology, not just one person and one computer. So I think a lot more has moved toward that lately. So I would say that's more of a misconception from an academic perspective: people think HCI equals the designing of UIs, and that's kind of a small part of it.

Lina 12:30
Yeah, and I guess to finish it off, I mean, is there anything else that you would like to include, any main takeaways to keep in mind, I guess, related to, you know, social media or technology in general?

Dr. Monroy-Hernandez
Yeah. I guess, since you guys talk to students, maybe an invitation to reach out if they're interested in any of these topics. We have a few different projects; maybe I'll tell you a little bit about how we landed on them. Before coming to Princeton, I spent about 11 years in the tech industry, at Microsoft and then Snapchat. And throughout all those years, there were always projects that I wanted to do, but that didn't connect to the for-profit goals of a company, even though I thought they were really important for society. So my goal here is to work with students on building public interest technologies. That means thinking about what alternatives there might be to existing platforms or existing technologies, ones that are not designed just to make money for a company but to better society. A few examples of things that we're doing: building a decentralized and locally owned alternative to DoorDash and Uber Eats, kind of these food delivery companies. What would it look like if they were owned by, say, unions of drivers, or by local restaurants, or by municipalities, and so on? We are also exploring things around augmented reality to help kids learn programming, you know, kind of like a version of Scratch for augmented reality. So, generally speaking, just thinking about public interest technology. I think this is an area where it would be great for us to connect with more students.

Senna 14:11
Do you just have any general advice for students listening or listeners in general?

Dr. Monroy-Hernandez
I mean, I guess one thing is that, even if you are not an engineer or a computer scientist, there is a lot for people to contribute to technology development, from philosophy to design to, you know, the social sciences. I feel like technology is everywhere now, part of every single thing we do, and so I feel like the more people are engaged in those decisions, the better. And one thing that maybe disappoints me sometimes is that we often observe the problems with technology, but don't propose or work towards making better versions of it. And I feel like, even if one is not a technologist, you can still be part of that movement.

So even today, when we were talking about the Twitter implosion: there is a big movement towards a federated alternative to Twitter, called Mastodon. And I think, even if one doesn't run a Mastodon instance, one could try playing with it and using those alternatives. So I guess every single click, or every download of an app, is in itself a kind of participation in an environment that you may or may not agree with. So I think just making those decisions more conscious will be really helpful for everyone.

Senna 15:31
Thank you so much for joining us today. That was super awesome. Thank you so much. Yeah, thank you.

This episode of B Cubed was hosted by me and Lina Kim, sound engineered by me, and produced under the 146th Managing Board of the 'Prince.' For more information about Dr. Monroy-Hernández's research, visit the links in the podcast episode description. From the 'Prince,' my name is Senna Aldoubosh. Have a great rest of your day.

Transcribed by https://otter.ai