Roblox Tech Talks

In this episode of Tech Talks, Roblox founder and CEO David Baszucki speaks with Matt Kaufman and Eliza Jacobs about how Roblox is advancing its commitment to safety, civility, and freedom at scale. 

They discuss new features like Trusted Connections and age verification, how AI and human moderation work together to protect the community, and the importance of building systems that adapt to cultural norms and parental expectations worldwide. The conversation also covers how Roblox empowers developers and parents with tools to create safe, age-appropriate experiences and how these innovations are setting a new standard for online platforms.

What is Roblox Tech Talks?

Discover the people, ideas, and technological breakthroughs that are shaping the next iteration of human co-experience. Featuring a diverse cross-section of guest speakers from Roblox’s product and engineering teams, we’ll discuss how we got to where we are today, the road ahead, and the lessons we’ve learned along the way. Hosted by David Baszucki, founder and CEO of Roblox.

Ep 27 Safety
David: [00:00:00] Hey, welcome. I'm Dave Baszucki, and you're listening to Tech Talks, where we talk about everything at Roblox, technical and otherwise. Today: safety, version 2.0. We are back with Matt Kaufman. We're joined by Eliza Jacobs, and this one's really gonna be good 'cause we are just about ready to introduce a bunch of new stuff on Roblox.
We're gonna be talking about trusted connections. We're gonna talk about age verification. We're gonna talk about policy and how policy interacts with that. But before we go, maybe just a quick welcome. Eliza, Matt, welcome. How are you doing today? We're good. Good. Okay. Matt, maybe introduce yourself and share what you do at Roblox.
Sure.
Matt: Uh, so I'm Matt Kaufman. I'm the Chief Safety Officer at Roblox. Um, and I've been here for about eight years. Um, and maybe something fun about that is that when I started working at Roblox, the entire company was significantly smaller [00:01:00] than the team we now have working on safety. A lot has changed in eight years.
David: Uh, a lot has changed. Good. So it's arguably the foundation of everything we do. And after you've introduced yourselves, I'll tell a quick story about it. So, good news.
Eliza: And I'm Eliza Jacobs. I'm Senior Director for Product Policy here at Roblox. I joined about four years ago to found the product policy team, and I've just seen it grow and explode and expand, um, working in lockstep with the product teams and the operations teams to really bring the safety baseline to Roblox.
David: Okay. And so just taking a step back. When I think about the whole safety group, under the chief safety officer, we have product, we have engineering, we have policy. We have live ops, which is like our global network, and I believe we also matrix in design and data science. Right? Are those the six major components?
Matt: Yeah. I think when we think about safety, we really think about, um, what are all the [00:02:00] different pieces that have to come together to work very closely to make the platform as safe and as civil as possible.
And I think when I stepped into, uh, the safety role a few years ago, the goal was like, how do you get all those people working together as closely as possible? And we've actually made a lot of advances. You know, even when we think about AI and how we implement AI, working very closely with our operations teams and the policy teams has made that process go much faster.
David: Well, I was gonna ask you about advances you've both seen, and maybe I can set the table with those advances. The advance I remember is from about a month into Roblox, when there were about a hundred people on the platform. We just, this weekend, I believe, passed 21 million concurrent, so it's a lot easier with a hundred than 21 million. But even at that point, we could see just the normal things you would expect from a hundred creative people all chatting at the same time. [00:03:00] And Eric said, we gotta build a safety system. And so that very first safety system, even very early on, was people could report bad things. We got giant lists of the things they reported, what had been said, and each of us spent every fourth day as a moderator, just reading through this and trying to figure stuff out. I know we've come so far since then. So, um, Matt, what was it like when you joined?
Matt: When I joined, um, safety was obviously an incredibly important part of Roblox, but a lot of it was really handled by humans, looking at all of those abuse reports and fielding all the questions. And you can do that when you have a smaller number of people on the platform, but when you're approaching numbers like 21 million at the very same time, you really need machines to help with that. So when we think about safety today, we think about all of our first-line decisions being made by a computer.
David: Yeah.
Matt: And then [00:04:00] we look at those decisions and we do QA on them. We also look at when users say, Hey, I think a decision was made and I wanna appeal that, I don't think that was right. And that's where our humans step in. So our humans are really the backstop behind all of the AI. Um, and one thing that I'm really proud of is our rule, which is that we will never implement an AI algorithm unless it is more, um, accurate than humans.
And it's really important to understand that, like, if you ask a human to make a decision, one person can make a pretty good decision. But if you're asking thousands of people to make decisions millions of times a day, machines are much better at that. And so we set ourselves a very high bar for when we implement these algorithms.
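To make that rule concrete, here is a minimal sketch of what an accuracy-gated rollout check could look like, in Python. The names, the numbers, and the shape of the evaluation are hypothetical illustrations, not Roblox's actual system.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    model_accuracy: float   # candidate model's accuracy on a labeled QA set
    human_accuracy: float   # human moderators' accuracy on the same set

def may_deploy(result: EvalResult, margin: float = 0.0) -> bool:
    """Gate deployment: ship only if the model is at least as accurate
    as the human baseline (optionally by an extra safety margin)."""
    return result.model_accuracy >= result.human_accuracy + margin

print(may_deploy(EvalResult(model_accuracy=0.97, human_accuracy=0.94)))  # True
print(may_deploy(EvalResult(model_accuracy=0.90, human_accuracy=0.94)))  # False: stays in review
```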
David: That's lovely to hear. We always say that publicly. Some people, um, over the years have watched our financial statements and, you know, seen a shift in the ratio of humans to AI and said we're not taking safety seriously. But the internal knowledge that AI can be better than [00:05:00] humans, and that we only use it in that case, has given us enormous scale.
And so how about when you joined, Eliza? You know, policy for me was almost a newer thing. We had all the product, we had all the technology, we had lists of what we would moderate and whatnot. And my impression is, since you've been here, we've really organized it, almost like good software engineering. You know, bad software is lots of redundancy and duplicated code all over; good software is no redundancy, very organized. That's the impression I get from our policies.
Eliza: Absolutely. Yeah. When I joined four years ago, there hadn't been a policy team before, and the moderation teams had been doing an amazing job of trying to keep track of how they were making decisions, with lists, sort of what you were talking about when you started. But there wasn't a central team whose job it was to document those decisions and make it scalable. Right. And the only way to have thousands of agents, um, or AI systems make the same [00:06:00] decision on the same type of content every time is to really do the deep research and write it down, just like good code. So policy is the code of decision making on the platform, um, for all manner of issues, whether it's safety issues, the kinds of content we allow, um, or issues in customer service or issues in our IP operations. So really drilling down: where is the line that we wanna draw? Documenting that line, and then launching it both to humans and machines and making sure they're making consistent decisions.
And in the last couple of years we've actually started using AI to iterate on our policies much more rapidly, sort of in our internal systems. How can we test the policy internally with AI before launching it to humans or machines? And it's made our whole policy infrastructure more efficient.
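As a rough illustration of "policy as the code of decision making": a toy rule table that human tools and models would both consult, so the same category always maps to the same action. The categories and actions here are invented for the example, not Roblox's real taxonomy.

```python
# Toy illustration only: category names and actions are invented.
POLICY = {
    "personal_contact_info": "remove",   # e.g., sharing a phone number
    "mild_insult": "nudge",              # educate rather than punish
    "ordinary_chat": "allow",
}

def decide(category: str) -> str:
    # Humans and models map the same category to the same action,
    # which is what makes millions of daily decisions consistent.
    return POLICY.get(category, "escalate_to_policy_team")

assert decide("mild_insult") == "nudge"
assert decide("some_new_trend") == "escalate_to_policy_team"
```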
David: Now, I think we talked about this, and I was fully transparent, but there was a time, maybe five or six months ago, when we were looking at some new policy, and we had a big list of all the things that would be affected by the policy.
And I [00:07:00] took that list and ran it through various LLMs, and I just said, Hey, what is the policy that most concisely defines all of these things? And I was surprisingly excited to see that what the LLM was spitting out was very similar to what your policy team had been writing, which seemed, like well-written software, non-redundant, the simplest description of it.
So, is there some notion of trying to write the simplest policy for the most complex set of situations?
Eliza: Yeah, I always tell my team I want the smallest number of distinctions to get the maximum impact, right? I don't wanna make more distinctions or be more complicated, 'cause if we're launching to globally scaled operations and thousands of moderators across the globe, that's not gonna work.
So we have to be as simple as possible to get the outcome that we want.
David: And a nice thing about your policy is, uh, I think even four or five or six years ago, there was this vision of: at 3:00 AM someone [00:08:00] calls the CEO and wakes the CEO up and says, we gotta make a call on this thing. I don't have that feeling at all, because I feel our policy is very consistent. You would handle it, and I wouldn't be making those types of decisions.
Eliza: Right. Policy is about making the decision when you have the time and space to do the research and think through it and not in those emergency situations.
David: Okay, so one of our values is respect the community, and I think there's a little bit of a notion that we, of course, work on things ourselves, but it reflects beyond Roblox into the rest of the internet. I know there's one thing we're gonna talk a little bit about, which is the 13 through 17 segment and some of the things we have. Maybe before we dive in, any comments? I feel we can actually go beyond just thinking about Roblox. What we do actually shows up as an example for other companies as well. Do you have any, you know, anecdotes around that?
Matt: Sure. I think we start with thinking [00:09:00] about our users and our community, and when we think about what they're doing on a day-to-day basis, they're not just using Roblox. They're on lots of different platforms. They have different accounts. They're, you know, doing all kinds of stuff. And so when we think about driving safety and civility, you really have to recognize that we just see a slice of somebody's life, and they're spending time doing lots of other things. And so it makes safety really complicated, because we have to try and discern from our small sliver what else is happening in these users' lives, especially when it comes to what we call critical harms, where somebody's safety is at risk. And what we see ourselves as doing is trying to set a very high bar in our policies and in how we, um, proactively look for violative content and behavior on the platform. And really, we offer technology for even other companies to be able to leverage some of our learnings, because it's not really just about making people safe on Roblox, it's about making people safe across the internet. Um, because, back to [00:10:00] respecting our community, our community really does live on multiple platforms.
David: That's a fun clue-in also on the policy side, um, coming back to 14-year-olds. I know that's a clue-in because there's just a very wide range of what 14-year-olds can do on the internet. On the Roblox side, we're analyzing all the text, analyzing the speech. We have a lot of policy in place. I know there are other places where there's less of that from a policy standpoint. That's right.
Eliza: Yeah. I think, you know, first of all, we're one of the few platforms that is all ages. And so we have to really think about our full audience and our full community. Um, and I take really seriously the fact that for a lot of kids, Roblox is the first account they ever have on the internet.
Yeah. Right. So we get to teach them how to be safe and civil online through how our product works and how our policies operate. Um, and they can take that to other platforms. They can say, I learned on Roblox that this is not okay, and I'm not gonna do it on other places either. Um, but we also take [00:11:00] very seriously the fact that as kids grow up, there are different things that are developmentally appropriate at different ages and stages, right? And what's appropriate for a 14-year-old is not necessarily what's appropriate for a 7-year-old. So we wanna start from a safety baseline in a more conservative place and slowly open the platform up as kids grow, in ways that are safe and appropriate for them.
David: Yeah. Well, we talk in company meetings sometimes about how grateful we are that we started with under 13 as well as over 13 for the last 18 years. Right? It built up the muscle and the technology to handle that. And at the same time, I feel, you know, publicly sharing that, look, we want to handle under 13, 13 through 17, 18 through 21, and 21 and up, all in age-appropriate and policy-appropriate ways, also shares a little vision about what we want to do on our platform.
Matt: Right. I mean, we really want the platform to be safe and civil no matter [00:12:00] who you are, no matter what age you are. When you show up, the platform should understand what's expected and make sure that you feel safe. And all of the work that we've done around policy and AI to make a platform that adapts, no matter whether you're five years old or you're 25 or anywhere in between, is really exciting. And, um, I think that's a place where we innovate and where we have a lot of learnings. It's really an opportunity for us to lead and share what we've done on our platform with other platforms, to try and make everybody safer on the internet.
David: Okay, so we're going to, um, dive straight into this age band of 13 through 17, as well as all the other ages on our platform. We're introducing a new concept: there are friends, or connections, and then there are trusted friends and trusted connections. And this is part of a vision that, of course, we [00:13:00] want certain 15-year-olds who are best friends in real life to be best friends on Roblox, to talk about almost anything they can imagine, to have free conversations. But for a 15-year-old, there are also other people we don't want them having that completely free communication with. So can you give us a big, high-level view of what this is all about?
Sure.
Matt: So what we're really trying to do is allow our users to differentiate between friends they're really close with and friends who are just casual acquaintances. Yeah. And we know that, you know, for teens in particular, with the friends they're really close with, they have very open, very deep conversations. I think this is just a really important part of growing up, and it's expected for that age. So the first thing we're doing is we're renaming friends to connections. It's a simple change. It'll change what you see in the product, but really no difference in the way current friends work. But then we're introducing something new: this notion of trusted connections. And so with a [00:14:00] trusted connection, we're allowing for more expanded ways that people can interact with each other. And at the beginning, that's really about unfiltered text chat and unfiltered voice.
David: So if I'm a 15-year-old and I have a trusted connection, I should be excited, right? Yeah. Because I can say more on Roblox. That's right. Like that old, um, lots-of-hashtags thing, whatever I do, is going to go away to some extent. Right. It's just much more
Matt: Freedom of expression. Yep. And that way it's a lot more fun to play on Roblox. Yeah. 'Cause you can talk to your friends. You can be moving back and forth between different games. You can invite your other friends in. And you really just hang out.
David: So it's exciting, 'cause both of you work on safety all day long, but in this case it feels like you're working on both safety and freedom at the same time.
Eliza: Right. But it's really about what's developmentally appropriate, like a freedom that makes sense for 15-year-olds. Yeah. You know, when I was a kid, I got my own phone line in my bedroom. Yeah. Um, and this is the ability to talk to your friends that you know, that you trust. [00:15:00] And that doesn't mean we're pulling back on safety at all. Right. That's right. Because we're still doing proactive detection for our critical-harm areas. We're still running our models in the background, so we'll intervene if we see something that's truly serious. And you can still report to us if you see a violation of our community standards.
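A minimal sketch of how message handling could branch on trusted connections while the critical-harm models stay on for everyone, as Eliza describes. Every function here is an illustrative stand-in, not Roblox's real pipeline.

```python
def looks_like_critical_harm(text: str) -> bool:
    # Stand-in for the always-on proactive detection models.
    return "critical-harm-example" in text.lower()

def standard_filter(text: str) -> str:
    # Stand-in for the age-appropriate text filter.
    return text.replace("badword", "####")

def process_message(text: str, is_trusted_connection: bool) -> str | None:
    if looks_like_critical_harm(text):
        return None  # block and escalate, regardless of trust level
    if is_trusted_connection:
        return text  # unfiltered text chat between trusted connections
    return standard_filter(text)

print(process_message("badword!", is_trusted_connection=False))  # ####!
print(process_message("badword!", is_trusted_connection=True))   # delivered as written
```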
David: It's really interesting, 'cause it reminds me, I dunno if you saw the legacy of this: very early on, when we built the very first text filter, we were measuring both safety and freedom at the same time. And as we advanced the technology of the text filter, we were trying to advance both of them simultaneously. Right. It was getting better all the time, and on the freedom side, not making mistakes that would interrupt communication. So when I say freedom and, uh, safety, it's like you're optimizing them both simultaneously.
Matt: Right. And we even see improvements in the platform today. Um, it's a little bit different, but when we have users using voice communication inside of [00:16:00] games, this, again, is 13 and over, and they're playing games and talking to each other in the game, we subtly nudge people if we think those conversations are getting a little bit edgy or a little bit violative. And what we found is that just by subtly nudging people, the number of abuse reports, the number of times people raise their hand and say, Hey, I think they're saying something that's maybe inappropriate, has fallen by more than half. And so we know that as people become more civil and we can nudge them in the right direction, we actually see engagement go up, because people are having more fun and they feel more comfortable. And so it is about that balance between safety, um, and freedom. And it's really important that we're always working on both.
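A toy version of the nudge Matt describes: an edgy-but-not-clearly-violative band triggers a gentle reminder rather than a moderation action. The scoring function and thresholds are invented for illustration.

```python
def civility_score(message: str) -> float:
    # Stand-in for a real model score in [0, 1]; higher means more civil.
    rude_words = {"stupid", "idiot"}
    hits = sum(word in message.lower() for word in rude_words)
    return max(0.0, 1.0 - 0.4 * hits)

def maybe_nudge(message: str) -> str | None:
    score = civility_score(message)
    if 0.4 <= score < 0.8:   # edgy, but not clearly violative
        return "Hey, let's keep it friendly. See our community standards."
    return None              # either fine, or handled by moderation instead

print(maybe_nudge("you're an idiot"))  # returns a nudge message
print(maybe_nudge("nice build!"))      # None; no nudge needed
```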
David: Well, I think there are two analogies to that. One is, um, the broken windows theory. I mean, if you go to a place, you know, where unfortunately a lot of windows are broken, it's easier to pick up a brick and throw it through a window than if you go to a place where you don't see that at all. So I think that's, uh, super interesting. I do feel there's gonna be a time [00:17:00] when we, um, let researchers into Roblox and they take data in a privacy-compliant and legally compliant way. And I'm optimistic their research is gonna say that these nudges are actually educational, that with a gentle nudge you keep playing, and that there's actually a civility-learning aspect to the product itself.
Eliza: There's this concept of upstanders, of people that say, Hey, that wasn't really very nice, or that wasn't really very cool. And that's what we're doing with our nudges. We're just saying, Hey, maybe cool out, um, on that. And I think it really makes a difference for the civility of the platform.
David: And it's really cool to have things like that built into the product.
Matt: And you have to do it in real time. That's because, I think, one of our learnings is that, yes, there's violative content and behavior on the platform, and it's the same on every platform, but almost all of the users who exhibit that behavior just don't know the rules.
David: Yeah.
Matt: You know, in the real world we have tons of contextual clues, like how somebody's eyes look at you, or [00:18:00] maybe a sign on the wall when you walk in somewhere, or people know that in school you behave a different way than you might in your own living room. Online, those clues aren't as easily discerned, so people just don't know the rules. And when we think about nudging, it's just about telling people, Hey, here's what's expected. Yeah. And most users hear that and then their behavior changes. And it really just up-levels the civility and the engagement of the platform overall. Um, and it's a really interesting research area that we're looking at.
David: Okay. So if I'm listening in, um, you know, I've historically said when I talk about Roblox that we have to be safe by default for all ages, and so we have to filter everything. And obviously kids are mischievous, and they'll try all kinds of things to get around that. So now we're talking about giving more freedom of communication. We're talking about trusted connections and more freedom. [00:19:00] So how's that gonna work with maybe people who aren't age-appropriate and want to have more freedom?
Matt: Well, maybe, David, if you think about it: when you sign up for just about every platform, every account online, there's always a question, maybe a checkbox, that says, how old are you? Are you over 13? That's right. How are we gonna handle ages? Right. And you know, it's tough. I believe most people are honest when they hit those things, but there are always some people who will try and get around the system, and in the past, everybody's just had to take your word for it. That's right. And so what we're introducing now is, um, age verification through these age checks. And the way it works is: we just take a video of your face. We ask you to turn your face to the right, to the left. We make sure that you're a real person, that you're actually alive, that it's not a mannequin or, right, you know, some photo you're putting up.
David: Because the first thing I would imagine is I'm gonna, like, print a picture of someone and put it up there. Exactly. That's not gonna work, you're saying?
Matt: Yeah, that won't work. Okay. And then from there we can estimate how old you are. [00:20:00] Yeah, and I think, you know, most people who spend time with kids, or have kids of their own, know that people's faces change as they get older.
David: Yeah.
Matt: And the algorithms are actually pretty good today and they can come back with a pretty high confidence estimate of how old you are, in particular when you're talking about somebody going from a teen into their twenties and thirties. The algorithms are pretty good.
David: Okay.
Matt: So we're gonna check your age, and we have pretty high confidence whether you're above or below 13. It's not perfect if you're exactly 13 years old. I mean, imagine, you know, when somebody has their 13th birthday, it's not like their face suddenly changes. That's right. Um, so in those cases where we're not really sure, uh, we let users identify how old they are, either through verified parental consent or through showing their ID or something like that. And at that point we have a much higher confidence that you are actually over 13, and then we can get back to what Eliza's saying and offer more age-appropriate ways for you to talk to your friends, which is trusted connections. [00:21:00]
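A sketch of the age-check decision flow as described: a liveness-checked face video yields an age estimate; a confident estimate settles the 13 boundary, and anything uncertain falls back to an ID check or verified parental consent. Thresholds and names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    is_live_person: bool    # passed the turn-your-head liveness check
    estimated_age: float
    confidence: float       # model confidence in [0, 1]

def resolve_age_band(est: AgeEstimate) -> str:
    if not est.is_live_person:
        return "rejected"   # a printed photo or mannequin won't pass
    if est.confidence >= 0.9:               # hypothetical threshold
        return "13_plus" if est.estimated_age >= 13 else "under_13"
    # Near the boundary, the face alone isn't enough, as Matt notes.
    return "needs_id_or_parental_consent"

print(resolve_age_band(AgeEstimate(True, 16.2, 0.95)))  # 13_plus
print(resolve_age_band(AgeEstimate(True, 13.1, 0.55)))  # needs_id_or_parental_consent
```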
David: Right. And so, highlighting once again, this is, um, freedom with safety. Really exciting. If I were a 15-year-old on Roblox, I'd be really excited that I can do this. And beyond what you shared about age estimation, um, someday for 21 and up, we'll probably have ID verification for that.
Matt: Yeah, we'll do that. And, you know, the reason why we're focusing on the video means of doing this now is because a lot of users who are, you know, 13, 14 years old don't have IDs.
I don't think people are really comfortable sharing their IDs with everybody. Um, so this is a very, you know, minimally intrusive, easy way to check somebody's age. It also lets us implement some other safety features for trusted connections. So one thing we're doing is, if you have two 15-year-olds and they wanna become trusted connections with each other and chat and hang out, great. That's how a lot of people on Roblox become really close friends. At the same time, if you're [00:22:00] 15 and somebody else is, say, in their thirties or forties, or even just anything over 18, we wanna make sure that those people know each other in real life before we let them become trusted.
David: So you can still be a trusted connection with your uncle or someone in your family.
Matt: Yeah. You just have to demonstrate that you know them in the real world and then it's totally fine.
David: And knowing them in the real world means, um, phone bump, QR code, those types of things. Yeah. Things like that. Okay.
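A minimal sketch of the pairing rule just described: two verified teens can become trusted connections directly, while a teen and an adult must first prove a real-world relationship, such as an in-person QR scan or phone bump. The function name and the exact rule shape are assumptions for illustration.

```python
def can_become_trusted(age_a: int, age_b: int, verified_in_person: bool) -> bool:
    if age_a < 13 or age_b < 13:
        return False                 # trusted connections start at 13
    both_teens = age_a < 18 and age_b < 18
    teen_adult_pair = (age_a < 18) != (age_b < 18)   # exactly one is a teen
    if both_teens:
        return True
    if teen_adult_pair:
        return verified_in_person    # e.g., in-person QR scan or phone bump
    return True                      # two adults

print(can_become_trusted(15, 15, False))  # True: two verified teens
print(can_become_trusted(15, 34, False))  # False: no real-world proof yet
print(can_become_trusted(15, 34, True))   # True: the uncle case
```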
Eliza: And I think it's so exciting, 'cause you know, even a couple of years ago this technology wasn't available, wasn't accurate enough. And so that meant we had no way on the platform of verifying that you were a kid, right? Maybe you had a driver's license or a passport once you were a little bit older, but that was really high-friction and pretty inequitable, right? Because a lot of kids don't have that. And so this is really the technology enabling safety and freedom at the same time, and it's really cutting-edge and really exciting.
Matt: With a hundred million users on the platform, anytime there's an incident in which somebody is at risk, we take it incredibly seriously.
David: One of the policies we [00:23:00] sometimes talk about is this balance around under-13 safety. Tell us more about the policy team. What does it look like, and what do you do?
Eliza: Yeah, so the policy team owns our community standards, which is our external, um, list of what's allowed and not allowed on the platform. But we also write and document all of the internal documentation that supports those standards, and that, uh, trains our human moderators and also all of our models. So we're constantly refining that training and constantly updating with the trends of the day, and we have to keep up with the slang of the day and all of that. Um, and what we're trying to do is really mirror the freedoms and realities of kids and adults in the real world. Um, and so we also work with the product teams, yep, to define who should have access to what features at what point in their lifecycle on the platform, and then what kinds of things parents might [00:24:00] really be the right people to control. Right? For our youngest users, we're defining the safety baseline as a safe-by-default experience. And parents might go in and say, look, my 9-year-old is ready for this kind of content, so I'm gonna, you know, give them access to that content.
And I think, you know, as parents, um, we know that different kids are different and what's appropriate for that kid is different. And really for our youngest users, it's the parents that know that the best.
David: So this has also been a very active discussion in the company, because there's just such a wide range of parents and families, yes, out there. And it has been a very wide discussion around: where do we stay safe by default and give parents the freedom to open things up, versus where do we go very free and then let parents, by default, make things more restrictive? I think generally we've had to take the assumption that not all kids have parents who are actively, absolutely involved with them.
Eliza: Right? And it's overwhelming as a parent. There are so many different platforms that your kid is on; you have to understand the controls of every one of them. And so [00:25:00] we wanna have a safe-by-default experience for everybody, not just for kids, um, and then give parents the controls they need to have insight into what their kids are doing on the platform, and to open up features and content as they see fit. And I think of this like the real world: when, as a parent, do you buy your kid a ticket to a PG-13 movie? That's right. When do you let them fly alone to see their grandparents? Um, when do you let them walk into town and get an ice cream? All of those are decisions parents are making all day in the real world, and we wanna give them the ability to make them on Roblox as well.
David: Now this gets even more complicated. It's almost crazy to imagine there was a time when Roblox didn't have experience guidelines, you know, starting to designate certain content for certain people, or types of communication policy. This gets more complicated globally, because just like in the movie business, different countries have different movie ratings. I think we expect the same thing for 3D platforms. So how do you see this policy thing [00:26:00] globally?
Eliza: That's right. Yeah. And different countries have different game ratings authorities and different expectations about the types of content that are appropriate for different, uh, age groups. And that's really cultural; it's really localized. And so I think our long-term vision is that we build a system that is adaptable and dynamic in every place that we operate. So for different ages, for different countries, for different types of content, we're able to dynamically rate and serve the right content to the right people in the right places.
Uh, and you know, we're continually building a system to be able to support that reality.
David: And generally, if I'm a parent, I think we're reasonably comfortable being transparent about our policies.
Eliza: Oh, not just comfortable. That's a baseline requirement. Yeah. Right. We need the community to be able to understand what the rules are. You can't follow a rule you don't understand or don't know about. Um, and so we wanna be super transparent, have lots of documentation, and, like we were talking about with the nudges, a lot of contextual information in the moment: here's why this [00:27:00] is or isn't allowed, go here for more information. We have no interest in hiding the ball. We wanna be super transparent.
Matt: And we wanna make that work whether you're seven or 17. That's right. And so when we think about the nudges, we have to understand more context, like: how old are you? What is the situation you're in? That's right. What you might nudge a 7-year-old with might be very different than a 17-year-old, and it may be very different depending on where you are in the world and what's expected. So I really think, when we think about taking a platform approach and a systems approach to solving these problems, we're building something that is age-agnostic and agnostic to where in the world you are, and it's just adapting all the time to figure out the right thing to do.
David: Okay. So where, um, where are safety and civility and freedom going on the platform? Maybe without giving too much away, what do we have? What are we thinking about?
Matt: Maybe a good place to start is to recognize that safety is something that is never done. As we make Roblox safer and we do a better job of [00:28:00] filtering text communication that might be bad, or filtering people who are trying to upload content that's bad, people just adapt. That's right. So it's like a real-time, you know, tug of war between some of the bad actors and what we're doing.
David: Hey, can I ask you to riff? We know there's a wide range of bad actors out there, right? Yeah. And we also have just incredibly creative young people, side by side with true bad actors, who wanna push the limits of our platform, right?
Matt: So I think, internally, we start with the presumption that most people are good. Most people have great intentions. They just don't know the rules, and that's why we have these nudge systems and things like that. Yeah. But I think there's a general rule of the internet, which is that any platform that gets to the scale we operate at becomes a place where bad actors want to go. That's a very hard problem, though, because there are so few of them.
It's like finding a tiny, tiny, minuscule needle in a ginormous haystack. And so [00:29:00] we've devoted a lot of energy to building algorithms to try and identify that type of anomalous behavior. Mm-hmm. And so what we do is identify accounts that look like they are problematic for some reason or another. And then we have a whole investigations team that goes and looks at each one of those accounts by hand, and they make the determinations that, so far, the AI isn't quite able to make. The AI serves up candidates, but then we have people, these are professionals who come from lots of three-letter intelligence agencies, who are really good at discerning what that actor's intent is. And then we make decisions to pull them off the platform, and we report that to law enforcement. We intend to share some of that technology for how we're doing it with other companies.
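A sketch of the two-stage pipeline Matt outlines: models score accounts for anomalous behavior and nominate candidates, and human investigators make the final call. The signals and scores here are invented for illustration.

```python
def anomaly_score(account: dict) -> float:
    # Stand-in for real behavioral models; returns a score in [0, 1].
    score = 0.0
    if account.get("mass_friend_requests_to_minors"):
        score += 0.6
    if account.get("repeated_offplatform_redirects"):
        score += 0.4
    return min(score, 1.0)

def triage(accounts: list[dict], threshold: float = 0.5) -> list[dict]:
    # The model only nominates candidates; humans make the final call.
    return [a for a in accounts if anomaly_score(a) >= threshold]

queue = triage([
    {"id": 1, "mass_friend_requests_to_minors": True},
    {"id": 2},
])
print([a["id"] for a in queue])  # [1] goes to the investigations team
```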
David: That makes the CEO job really a thoughtful position, because we're talking about a couple of things: technology getting better and better and better, and starting to have to make [00:30:00] decisions around safety and privacy, which we're very strong believers in. And, um, I'm not gonna say it's ever gonna go there, but it reminds me of that Minority Report movie, where, you know, we would probably never make pre-crime, uh, predictions at Roblox, 'cause that, yeah, the precogs, that might, you know, cross our respect-the-community and we-are-responsible values, as well as legal and privacy constraints. But you can imagine getting to a point where, with AI and behavioral analysis, we at least have signals of who potential bad actors may be.
Matt: It's something we talk about as a team fairly often, in particular because when you have kids on the platform, kids make mistakes. Yeah. Kids say things to get attention. It doesn't mean that they are, you know, permanently...
David: Well, Matt, haven't we, without giving away any details, compared notes about what we did when we were 11, [00:31:00] 12, 13, 14, and 15? Yes, we compared notes, because we were both pretty creative kids.
Eliza: Yeah. On the policy team, we work really closely with the intel team that's doing these investigations, and we have child development experts on our team, and it's important to remember that pushing the boundaries is actually a safe learning experience for kids. That's right. We want Roblox to be the place where you can sort of test the edges and understand what's safe and civil and maybe what's not. Um, it's actually an important developmental stage, particularly of the tween and teen years. And so going into these investigations means discerning who is a dedicated malicious actor, what has truly crossed the line, and where we can educate our users and our community and say, Hey, that wasn't cool, don't do it again. Yeah. And, you know, drawing that really fine line between those two things can sometimes be pretty close.
David: Yeah, I feel as a youngster I learned a lot going right up to [00:32:00] that edge. Right. You know, luckily I didn't get physically injured with some of those things I was doing, but there was a lot of learning there.
Matt: And I think, you know, when I was a kid, we went to the park, or we went to the beach, or we rode our bikes very far away.
Um, and that's where we learned to push the boundaries a little bit. Mm-hmm. And I think what we see today is people push boundaries in online places; that's where kids are hanging out. You know, kids have much busier schedules than they did when I was a kid. Parents are much busier, but kids still really want to connect. So we see that sort of adolescent boundary-pushing on the platform, and we just have to be really respectful of the fact that these are kids who are learning. At the same time, there are bad actors out there, and the bad actors we have to identify as quickly as possible and remove from the platform. Um, so it's a balance between those two things, and I think it's something that we as a safety team are constantly thinking about.
Eliza: And we're not thinking [00:33:00] about it alone. We have lots of partnerships with NGOs and other platforms and academics. There are just lots of resources out there. We're not inventing it, you know, in a silo. We're working with all of these partners to really understand as much as we can about the context, the trends of the moment, um, and informing our policies and our product approaches with that.
David: Yeah. It almost feels like the technology is gonna get better and better and better, and our job is gonna be becoming more thoughtful, from a legal, privacy, policy, and child development standpoint, about how to use that technology. Because it's gonna be able to simulate a human looking over the shoulder, and it's gonna be able to simulate a lot of other things.
Matt: I just don't think our job will ever be done. It will never be done. I think as good as the AI is, humans are incredibly creative. And, um, we have to take an approach where we just expect that to happen.
In fact, [00:34:00] as our systems get better, we see the behavior of users on the platform shift. So as our text filtering or voice filtering gets better at blocking certain types of content, we see people starting to speak in code. And so then we have to build the algorithms that can discern that code. And it's really amazing, you know, as things evolve, just seeing the community evolve at the same time. I think it just speaks to human nature and the fundamental creativity of people.
David: Yeah, I mean, in the midst of our world right now, with everyone trying to figure out where AI is gonna go, how smart it's gonna get, I would say it's an optimistic viewpoint to imagine humans in the loop forever, and human behavior in the loop as well. And, um, riffing on what you were just mentioning, Matt, about people trying to speak in code: the evolution of that is us trying to keep people on Roblox, because we've shared [00:35:00] so much, uh, about our focus on trust and safety. One of our big challenges is: let's just keep everyone on Roblox, and we're gonna get very good at that. Yeah. Because there are other platforms that don't have, I would say, the same focus in that 13 through 17 zone.
Matt: Right. I mean, I think what we've seen is that as our safety systems have improved, those dedicated bad actors understand that, and their motivations shift from trying to have conversations on Roblox to trying to get people to move to other platforms, where the safety systems are just not at the same bar as ours. So one of our big focuses recently has been to try and identify when somebody's saying, Hey, let's go somewhere else to talk. And you know, some stuff is pretty simple. Like if I say, Hey, Dave, what's your phone number? That's pretty easy to tell. But people are creative, and they come up with puzzles to share information. So we're building AI that tries to understand those puzzles and what's happening.
David: And, um, I'm optimistic that in a few months, [00:36:00] you and I are gonna go on Roblox and say, let's try to share a phone number. Yeah. And I think we're gonna find it very hard to do that. That's right.
Eliza: And that's also a feedback loop, right? So if there's a dedicated malicious actor talking to an innocent user and something gets blocked, that's a signal to that user: hey, that's right, I don't think that was good, because the Roblox systems are blocking it. Exactly. It's a feedback loop. Uh, and you know, kids' slang evolves very rapidly. Um, my team has to keep up with what words mean today, and tomorrow they'll mean something different. Um, and we have to partner with the product teams on all of these, we call them bypasses, all of the ways in which the community, um, and some bad actors are trying to get around our systems, and we'll just keep iterating and improving.
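A toy illustration of one kind of "speaking in code" bypass: normalizing spelled-out digits and separators before matching for a phone number. Real systems use learned models; this regex version only shows the idea, and every name in it is invented.

```python
import re

SPELLED_DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3",
                  "four": "4", "five": "5", "six": "6", "seven": "7",
                  "eight": "8", "nine": "9"}

def normalize(text: str) -> str:
    # Fold spelled-out digits and strip separators. (A toy pass: it would
    # also rewrite the "one" inside "phone", which is fine here.)
    t = text.lower()
    for word, digit in SPELLED_DIGITS.items():
        t = t.replace(word, digit)
    return re.sub(r"[\s\-.(),]+", "", t)

def shares_phone_number(text: str) -> bool:
    return re.search(r"\d{7,}", normalize(text)) is not None

print(shares_phone_number("call me at 555-123-4567"))              # True
print(shares_phone_number("five five five, one two three four"))   # True
print(shares_phone_number("meet at spawn in 5"))                   # False
```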
David: Now, um, one final thing maybe to think about. We're always trying to think about how we scale our platform, and we put our creators ahead of everything, and [00:37:00] we're trying to support them in building businesses on our platform and having their own flair and their own vibe. So I can imagine that even within a certain age group, certain creators might want to have subtly different policies. I can imagine both of us have an all-ages experience, but I might wanna be slightly more restrictive. How are we thinking about that?
Eliza: Um, yeah. So we are thinking about building APIs and dynamic systems that let the developers and creators, um, implement those kinds of different rules.
David: Which would be amazing.
Eliza: Right?
David: So I always use the example of: hi, I've created Mermaid Island for everyone. The policies are the standard Roblox policy, plus you, I don't know, can't say bad things about aquatic creatures or something. Yeah. Would we support that someday?
Eliza: You know, our developers are already doing creative things to have [00:38:00] this be real in their games.
Okay. But yeah, I think we could support that.
David: So the market is pulling that too.
Eliza: Yeah. Yeah.
Matt: And we imagine building a suite of APIs for the developers to be able to define their own policies. To be able to say, Hey, I have this in-game content creation, will you help me moderate it in real time? Or, I have people who are interacting in a certain way that I'm not sure about, will you take a look at it? And we want to implement all of that at the platform level, so these developers just have these APIs that they can call.
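A hedged sketch of what a per-experience moderation API could look like: the platform baseline always applies, and a creator layers stricter rules, like the Mermaid Island example, on top. Every name here is speculative; no such public API is described in the episode beyond the general idea.

```python
from typing import Callable

def platform_baseline(text: str) -> bool:
    # Stand-in for the platform-wide filter: True means allowed.
    return "blocked-term" not in text.lower()

def make_experience_filter(extra_rules: list[Callable[[str], bool]]):
    def allowed(text: str) -> bool:
        # Creators can only tighten policy on top of the baseline,
        # never loosen it.
        return platform_baseline(text) and all(r(text) for r in extra_rules)
    return allowed

# "Mermaid Island" layers its own rule over the standard policy.
def no_aquatic_insults(text: str) -> bool:
    return "dumb fish" not in text.lower()

mermaid_filter = make_experience_filter([no_aquatic_insults])
print(mermaid_filter("welcome to the lagoon!"))  # True
print(mermaid_filter("you dumb fish"))           # False
```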
David: Well, Matt and Eliza, um, thank you for respecting the community. Thank you for just fully embracing "we are responsible" in what are arguably two of the hardest jobs in the company. You know, at 2:00 AM I don't get the policy question; I get more of, Hey Matt, what's up here? So I appreciate your leadership, both of you, in doing roles that are super hard, but I hope rewarding as well.
Eliza: Thank you so much.
David: Alright, cool. Well, hey, this is Dave. Thank you for joining us, and we [00:39:00] look forward to next time, when we get together on Tech Talks.
Thanks.