How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Hello, everyone. Welcome to Pondering AI. I'm your host, Kimberly Nevala. And today, I am beyond pleased to be joined by Vaishnavi J. Vaishnavi is the founder of Vyanams Strategies, which you may also know as VYS. Her team partners with corporations, civil society, and governments to design online products that safeguard the well-being of children and teens. She brings to that work a wealth of experience from the trenches of companies such as Meta, Instagram, and Twitter.
Today, we're going to be talking about what awaits kids in an AI-enabled world and why a safety-by-design ethos benefits both young people and companies alike. So with no further ado, welcome to the show, Vaishnavi.
VAISHNAVI J: Thank you for having me. I'm really excited to be here.
KIMBERLY NEVALA: Yeah, these are my favorite conversations, I have to admit. Now, given the high visibility of risks confronting children, young adults, really everybody, on social media today, it's pretty easy to hyper-focus discussions such as this one on social media alone. Why do you think that is too narrow a view when we're considering how to safeguard AI-driven experiences for younger people online?
VAISHNAVI J: My view is that the term "social media" has become shorthand for the way children are social online. And the reality is that they're social online in a number of ways beyond your traditional peer-to-peer networking platforms. They are playing games. They are talking to AI chatbots. They're looking for and buying things online. They're looking at recommendations that their friends or their favorite celebrities are endorsing. So there's a wide number of ways in which they are being social online. And I think that's where a lot of the concerns come in.
But to restrict that conversation to just social media would, for example, remove a lot of important conversations around how they're being social with AI companions, or how they're being social in games, which sometimes have AI-powered experiences and other times not. So I think it's really important for us to widen our lens, and recognize the shorthand that we've all been using, and kind of shift some of the language of the next few years.
KIMBERLY NEVALA: Does that also then widen the sphere of companies and organizations that really need to think about safety-by-design, if you will, for their applications? Even though they may not traditionally consider themselves purveyors of services that are for kids.
VAISHNAVI J: Absolutely. And a big part of that is because even though these services may not have been designed for children and teens, they have a lot of children and teens using them. We just see enormous growth in the number of young people who are using online experiences across all those sectors that I mentioned.
And so if you do have young people on your platform, you have two decisions to make. You either decide that you don't want them there anymore, and so need to think about how to make sure they're not accessing your services. Or you recognize the value that they bring to your services and design age-appropriate experiences for them. And once you start talking about age-appropriate experiences, you cannot avoid talking about safety-by-design. The two really go hand-in-hand.
And the whole premise of safety-by-design says, as you are building your products and policies, make sure that safety and privacy are really built in at every stage of the process. Which is exactly what we're talking about when we talk about age-appropriate design.
KIMBERLY NEVALA: And when you say "safety," I presume we're not talking about safety in the sense that AI doomers or existentialists sometimes put out there. When you say "safety-by-design," what does safety encompass?
VAISHNAVI J: In the AI context, I think you've got your very severe harms that we always talk about: the risk of children being exposed to grooming, to their images being exploited for child sexual abuse materials, to different types of very severe exploitation. We also have severe harms like the potential for suicide or self-injury, the potential for developing a life-threatening form of self-harm like eating disorders, or being driven to desperation in an incident of very, very high-severity harassment or bullying.
Those are some of the immediate things we think about when we talk about safety in an AI context. But there's a much wider world of safety when we're talking about young people, and that really relates to their emotional and mental well-being when they're engaging with AI. What are their levels of anxiety and depression? How much support and community are they actually finding from AI solutions? How do we make sure that they're not being exposed to content, or to behavioral patterns, or feature sets that would really be inappropriate for someone of a certain age? Those are also a part of the safety conversation. And sometimes, we just look at one set. We just look at these very extreme harms without recognizing the full spectrum of harms and opportunities that we should be considering.
KIMBERLY NEVALA: I want to talk about what some of the basic guardrails or guidelines for age-appropriate AI design might be. But I'm wondering if it might be helpful, in an expository way, to ask how an experience might have been different if a safety-by-design approach had been applied - particularly something like the recent "ripped from the headlines" story that's all around today, about the child who recently, very sadly, committed suicide and was in conversation with a chatbot.
VAISHNAVI J: That was, I think, a really awful incident. And safety-by-design, I think, provides a number of potential avenues through which something like this could have been averted.
The first thing to think about is, what are those pillars of safety-by-design? We developed an age-appropriate AI framework at Vyanams Strategies a number of months ago for a client, and we were able to release an abridged version of that publicly. And we really talked about four main pillars of safety-by-design: the need to protect privacy and expression, to really ensure that the models are fair and equitable, that the experience promotes safety and well-being, and that there's transparency and accountability.
And some very tactical examples of what's in this framework - and this is publicly available - include reminding children repeatedly that the AI models they engage with are not, in fact, human. Now, that really should be a baseline. But if you look at most AI chatbots today - I don't want to say any - those reminders are few and far between. And that's really important because we know that children and teens form emotional relationships and attachments with a variety of things in their lives, including technology.
By the way, this is not new to AI. We've seen this with assistants like Siri and Alexa. That research is out there and well-established. So this isn't brand new information that companies need to incorporate. We have pre-existing work to show that this is important.
What I find frequently at VYS is that our role is really to be that of a translator: to take all the research, user expectations, and regulatory expectations that are out there and then embed with product and policy teams to implement that in practice. Because there's so much out there, and it's incredibly overwhelming for any product manager to think about child safety, particularly when they're building complex systems like AI systems.
KIMBERLY NEVALA: As you were speaking, it also brought me back to some conversations we've had with folks like Giselle Mota and Yonah Welker talking about accessibility, inclusivity. And their argument or hypothesis - and I think it's probably more than that, it's a proven fact at this point - is that inclusive design makes for better applications for everybody. They're more intuitive. They're easier to use.
You could probably argue that the kernel of thought behind even some of our current AI-assisted interfaces with LLMs started out with accessibility in mind.
So I'm wondering - and it's perhaps somewhat counterintuitive - but does a child-aware, age-appropriate design or safety-by-design ethic also ultimately benefit adult users? Not in the sense that we're trying to infantilize them, but that it also raises issues that have resonance and should be considered - even if they need to be considered with different weighting - for healthy adult experiences as well?
VAISHNAVI J: Yeah, I know. I try and take a step back sometimes when we're talking about child safety and say, well, why is child safety even important? What is distinctive about children as a demographic that makes us think about child safety and well-being? And there's two components to that.
One, we recognize that children have a certain state of vulnerability in our society, and that most systems and designs don't necessarily account for them. But two, that they have unique and distinctive needs and that these are things that we can actually support and enhance in their experience of a product. And those principles are not limited to children. Those principles can be applied to marginalized communities, to communities that don't get as much of a voice online, to folks with different needs who may not see themselves reflected in the systems that are designed.
If we just take a step back and look at those two principles, that's actually pretty universal. And I think that does make for a better design. So I completely agree that inclusive design isn't really just about designing for children. It's about designing for a variety of people with different sets of needs.
KIMBERLY NEVALA: Now, there's sometimes a tendency for us to think about these systems and say we should just keep kids off of them. So keep them away from these platforms, keep them off the products. And certainly, companies have the option to screen certain services and products to apply age limits and so on and so forth.
But you've also argued that a lot of these current approaches really outsource responsibilities that deserve to be lodged with, or reside with, application developers or digital service providers. Can you talk a little bit more about that outsourcing phenomenon and why it may seem like an obvious solution on the surface but, when you look under the covers, is perhaps working against our intent?
VAISHNAVI J: Yeah, I think it's really interesting that over the last 25 years, we've had what I describe as the tech libertarians: the ones who say, don't do anything, the internet is fine as it is, especially in the context of children - the internet is fine as it is for children, you don't have to touch it. And then the folks who think that tech is at the root of all evil, who think that, well, we just need to eliminate tech from children's lives.
And what is interesting about both these extremes is that they absolve platforms or companies of any sort of responsibility in design. We have this idea sometimes that it's really difficult to design solutions. And yet we're the ones who designed them. We've designed these solutions over the last 20 to 30 years. This isn't a Stone Age invention. And so we are capable of designing much better experiences. And by the way, we have.
If you look at the improvements in child safety, even on social media platforms or gaming platforms over the last five years, they far exceed the safety and privacy protections that were in place in the 20 years before that. So we are extremely capable of designing great experiences for kids. And so I think that we shouldn't be absolving ourselves, as a society, of the responsibility of designing better for children.
And then the final piece is, these technologies are in every part of our lives. So if we're saying that we're designing technologies that are going to be in every part of our lives but are not accessible to children, what happens when those children reach adulthood? Is there this magic switch that's going to go off, and suddenly they've got to learn how to use an AI chatbot or build foundation models? We're talking about future adults here. We need to make sure they are also a part of these systems and are graduating through these systems just like they graduate through other processes.
KIMBERLY NEVALA: And you also sometimes use an analogy to baby food. It's similar to a previous guest - and I can't remember who it was, so I apologize to them in advance - who said that if we all have to self-regulate, then parents or schoolteachers have to vet every single app and every single way that kids - particularly teens and younger adults, though younger kids are strangely, freakishly good at getting into places they should not get into - are online, on a phone, or a tablet, or whatever that may be.
She said, if I actually had to be in a grocery store and assess the supply chain of every single piece of produce or product I pick up, I'm never going to get out of the grocery store. And you make a similar analogy to baby food. Can you share that with the audience? Because it just exemplifies and demonstrates this concept so well.
VAISHNAVI J: I think parents and caregivers more broadly - whether it's parents, educators, or guardians - of course have a responsibility for how they curate experiences for the children in their lives. Absolutely.
But the baby food analogy is, well, sure, I'll make sure that I pick the right food for my child. But I assume when I go into a store and buy baby food that there's no sawdust in there. There are no poisonous chemicals in there. That's a fair assumption: that someone established a baseline level of safety and developmental appropriateness of the product for children, and then I went in and chose among the different options available to me.
So I don't mean to absolve caregivers, or guardians, or parents of any role in their children's development. But it is not realistic to expect that they be responsible for a baseline level of safety and age appropriateness within products.
KIMBERLY NEVALA: And this came up, as I understood it, with some of the - I don't know if it's fair to call it pushback - that you had when the surgeon general was talking about just slapping warning labels on social media. And you argued that it, A, may not be effective and, B, is not a substitute for both responsible design - or safety-by-design - and appropriate content moderation. Can you talk a little bit about that?
VAISHNAVI J: I mean, I would love to see what the label says. How extensive exactly is it? Is this an essay that we're attaching to products? And I think that warning labels are really tempting, but if you look at the history of warning labels, what do they do? They warn consumers about a product, and then it's the consumer's decision what they do with that information.
But that still puts the responsibility on consumers. Now, we don't just put warning labels on products that are fundamentally unsafe for consumers - products that are net negative, completely high-severity, high-prevalence, unsafe for consumers. We don't do that. We recognize that the government and industry have a role to play in regulating access to those products.
So I think warning labels are very tempting. But ultimately, what they do is they just sort of shame platforms. There's no real incentive to change designs or update your feature sets or policies.
We did a project a couple of months ago on how most content moderation policies today are irrelevant or actually quite useless for children and teens, because the ways in which they perceive harm are so different from the way we as adults perceive harm. And that requires companies to update their content policies. Not users - it's not up to the user. Users can't control content policies. Companies can.
KIMBERLY NEVALA: And when you say they perceive harm differently, or they perceive the experience differently, are you talking about young people or about the platform providers themselves?
VAISHNAVI J: Young people. So I'll give you a really interesting example here.
In partnership with researchers, we looked at how young people perceive violent threats when they receive them online. And young people who have been playing games or have been in social voice settings for a long time are really used to hearing them. To the point where neither the young person issuing the violent threat nor the person receiving it perceives it as such. And yet because it is such a shocking thing for us to hear as adults, we impose the most severe penalties.
Now, I'm not suggesting that we shouldn't. But in the process, we're missing out on all these coded implicit harms and implicit bullying that's taking place that our policies just don't account for. So posting a photo of four people, but tagging only three of those people in it, where the fourth person is deliberately being excluded. We hear that young people actually register that much more severely as a harm than the example we previously talked about.
But our content policies right now have nothing in place to address that. And it's hard to address that. It's hard to build a policy around that. So that's what I mean by, we need to be thinking about how our products are serving children. And I think safety-by-design is a really good way to dig into that at every step of the process.
KIMBERLY NEVALA: And I suppose there's also a complication here, where we can start to think about how expectations shift for what people might be able to say to you online. Because I play games, I might be less bothered by language that, in any other context, someone like myself might find really brutal, or harsh, or overly violent. But I wonder, is there any research that you're aware of lately that also talks about - now this is getting a little bit off safety-by-design, but I can't help myself - the implications for how we interact with each other and what social expectations are offline as well as in those online spaces?
VAISHNAVI J: Yeah, I mean, that's a great question. I think there is research out there on it. We haven't looked into it yet because it's not come up in the course of our work. I was just having a conversation yesterday with a group, some people were clients and some were just experts in the field. And we were just talking about how we're headed for a breakdown of the social contract.
And we, in many ways, have experienced a breakdown of the social contract over the last 10 to 15 years. But if we are perceiving harm fundamentally differently across different age groups, then I think there's real work to be done to bring each other along: to help older folks understand how young people are perceiving harm, and to help young people understand how the people they're interacting with, who might be older than them, are perceiving harm. It's really important to have that intergenerational conversation. These kids are going to be in a workplace one day. They are going to be in college interacting with a variety of other folks. How do we help them understand what acceptable interactions are in this context?
So I do think there's a real role for digital literacy to play there. And in unpacking some of those differences between different generations and different cultures' perception of harm and then creating ways for folks to understand one another better.
KIMBERLY NEVALA: And as you said, maybe not a net new problem, but one that is increasingly important as so much of how we interact moves online or is influenced by what we see and how we interact online. Now, maybe it's just unreasonable, but it's also unlikely that organizations are going to self-govern to the extent that may ultimately be required here. So what is the work to be done to ensure that we're establishing a solid foundation that everybody who is developing or delivering these kinds of products and services understands and is held to a reasonable standard? And who needs to do that work?
VAISHNAVI J: I mean, I think it's very encouraging that over the last decade, we've seen civil society and government come into the conversation. Because I think from a company's point of view, it's actually not that unreasonable that companies wouldn't do this on their own. If there are no socially or societally agreed-upon standards for the best way to protect children online, what are they supposed to do? And how are they supposed to determine what the best practices are? They're not necessarily experts in child safety or experts in youth well-being.
And I think we also talk about this frequently in the context of the biggest players, who certainly have the resources to hire or contract with such folks. But most companies out there are not Googles, and Instagrams, and TikToks, and Snapchats. They're smaller, medium-sized platforms that are figuring out how to do the best they can with very limited resources. So I actually think it's valuable for government and civil society to come into the conversation and help establish ground rules for what entering this market should look like.
I think where I feel a little disheartened is that we had a lot more conversation and dialogue amongst these different groups a few years ago. And over the last few years, my personal take is that everyone sort of retreated into their corners and is ready to fight. So I think we're in this very adversarial position with one another when, really, there's a real opportunity for collaboration. And I think particularly about the Trust and Safety teams at companies, which are very frequently aligned with external civil society on what the best practices should be. I think there's real value in having those groups talk to one another and figure out how we can support one another in advancing these goals within our organizations.
KIMBERLY NEVALA: I wonder, though, if this brings up the - I don't want to say inevitable, because I actually don't think it needs to be inevitable. But certainly, there is often a butting of heads between commercial or corporate imperatives and incentives and the things that we might, for instance, even outside of this conversation, bring up when we're talking about ethics more broadly or responsible and trustworthy AI.
Are there benefits that companies might be overlooking? So they could very much be seeing this as an additional something to do. They may be underestimating their liabilities, so maybe we can talk about KOSPA in a minute as well. But are there actual benefits companies overlook when they're not considering safety-by-design and considering children and how young adults and children might perceive or interact with their systems up front?
VAISHNAVI J: The biggest business advantage that I think companies sometimes overlook is that most people want to be safe when they are engaging in online experiences. And that feeling will keep them committed to your platform for a longer period of time. It may not look like longer sessions of engagement. Each session might look shorter because you're building in safety-by-design, or you're urging them to get off the app if they've been on it for a while.
But over the long run, you are securing customers and users for life. And that is really difficult to quantify, but it's also very difficult to ignore. So that's one thing I think companies sometimes overlook. We'll have conversations, for example, around age-appropriate design with companies that have come to us looking to understand how to do this. And they're always very pleasantly surprised when we point out that, actually, their 18-plus users frequently don't want to be in the same spaces as kids. They're not trying to be in the chat room with teenagers or with 10-year-olds.
And so thinking about age-appropriate design and what rooms or environments are suitable for different age groups is an active business asset as well. I agree that that's not the most intuitive logic. But I would recommend people talk to us and look at some of the work that we've been able to do with clients over the last year because it is true. There's a lot of value in creating those experiences that are appropriate for different age groups.
KIMBERLY NEVALA: This is probably incredibly oversimplistic, but I go back to folks like Baroness Kidron and the 5Rights Foundation and talking with her about some of their age-appropriate design code. And one of the things she said that is just really simple to do is to ask yourself: if this was happening in the "real world," would we think it's OK that much older people are talking with, or soliciting, or trying to befriend children on the street? We would think that was really odd.
And so you get into this conversation sometimes about, well, maybe the norms and the environment are different. But there are certain things where, I think, if you looked at the analogous interaction or experience offline, non-digitally, and asked whether that would be OK, it starts to spur some different questions perhaps.
VAISHNAVI J: I think that's fair. I think once we get into a competition of analogies, you can really find the right analogy for whatever you're looking to discuss.
KIMBERLY NEVALA: For whatever your point of view is.
[CHUCKLING]
VAISHNAVI J: Exactly. I mean, the English language is so full of wonderful analogies and expressions. So yeah, but I do think that there's certainly something to that. Looking at what offline behaviors are like and seeing whether we want them to be reflected in online experiences is certainly a valid point.
It is also important to consider that online experiences are designed to be different. There's a certain breaking down of barriers that we've seen be incredibly valuable over the years. Online experiences have given people who would never have had a voice or a platform - especially young people who would never have had a voice or a platform - huge places on which to advocate for their causes and what they believe in.
So recognizing the nuances of what makes the internet and online experiences great is also really important, especially as we head into, or are already in the middle of, the season of AI, as I like to call it.
KIMBERLY NEVALA: Are there any particularly striking differences or gaps that you've seen as you talk to young people about what their expectations are? Both for what experiences they would like to have and what experiences they would like to be protected from, versus how we as older adults may perceive their needs?
VAISHNAVI J: Well, one of the things that I'm very quick to always clarify is that we're not researchers. We're not PhDs. And really - I mean, gosh, my parents would have loved it if I had done that. But no, not a PhD.
What we are is product and policy specialists. So we really rely on the incredible work of researchers, academics, civil society who do that work of engaging with children. And we will often partner with them to understand what young people are going through. So this is really their work that we very humbly get to borrow and build off of for companies.
I think we sometimes, especially in the United States, take a very paternalistic approach towards young people. And frankly, I always find even the term "young people" so paternalistic. I'm like, "the youths" or "the young people." But they are frequently extremely aware of some of these harms. If you look at their resistance to misinformation, it's actually higher than in some other age groups that you would think of. If you look at their resilience to bullying and harassment, it's higher than in some other age groups that you would think of. So it's really important that we not build products and policies through the lens of what an adult thinks a child wants, and instead truly center them in the conversation.
I've been really impressed by the youth activists and digital responsible tech activists who grew up online and are now saying, hey, this is what we want to see from the internet. And if you think of the AI 2030 code that a number of youth activist groups have signed onto, that sets out very clearly a vision for the future that's informed by people who have experienced the internet fully, who grew up online. And I think it behooves us to listen to them.
KIMBERLY NEVALA: It is interesting, though, that they're probably younger adults, but we're actually still getting their perspectives because they've now been through the experience of growing up online. So they're not actively in the midst of growing up online, and so there is a…it's not a balancing. But I think we need all of those inputs, and then we need to also take into account where they are in their growth, and what's reasonable for them to be able to articulate and what's not.
And I think that is where a lot of this, when it comes to age-appropriate design, AI and otherwise, gets really, really tricky. Because it's so easy to veer, as you said, into a perhaps paternalistic view. And on the other hand, we know that especially younger children and teens may not have some of the wider perspective, which doesn't necessarily make them wrong. But it certainly means we have to measure some of those inputs differently.
VAISHNAVI J: Absolutely. I think bias is baked into any perspective. I think where we sometimes falter is that we're very quick to recognize the biases of children and teens but not so quick to recognize our own biases as adults. And so I think that's where we need to do some calibration, as a PhD researcher would say, in terms of where our views are coming from and what our perception is. Is it truly the case? Is it actually what's happening? I certainly suffer from that. I have to check myself all the time. And so yeah.
KIMBERLY NEVALA: Very interesting. So I'd like to talk quickly about the evolving regulatory landscape. We recently have seen the Kids Online Safety and Privacy Act. I think it's KOSPA now. Can you talk a little bit about how that might change the game, or if it will, for organizations who are putting out digital products and services? Which is pretty much everybody at this point, big or small. And if there are any particularly novel requirements coming out of that particular legislation?
VAISHNAVI J: So KOSPA is currently - it's passed the Senate with really resounding support, 91 to 3. It now sits in the House.
KIMBERLY NEVALA: That's amazing.
VAISHNAVI J: It's pretty incredible, right? 91 to 3. I actually gasped a little when I saw that. I wasn't expecting that kind of support. It now sits in the House. The House has introduced its own version of KOSPA, which actually removes the duty of care obligation.
And that's really the obligation that companies and industry have focused on the most. Because it effectively says that companies are responsible for reviewing their design processes and making sure that they don't exacerbate some pretty serious harms, like anxiety, depression, bullying, and harassment. The duty of care is probably the most interesting and contested part of KOSPA. It's not clear if it will survive the House.
We're not political analysts and don't try to be. We're in a very interesting position, I think. And I'm going to go on a tangent here: over the last year, I've been thinking about how we're in a very interesting position because we are the people in the middle. We are not public policy advocates. We're not pushing governments to do anything. We're certainly not in the camp of libertarian "let's just let companies do anything." We're very much in the space of: this is what research, and data, and young people, and parents, and caregivers say is the best experience for young people online, and we want to work with companies to help them achieve that.
And that puts us in this interesting position where we don't have a public view on KOSPA. But we do…I published a piece with Tech Policy Press talking about what it would mean in practice. Whether or not you agree with it, this is what it would mean in practice. So that's my little tangent.
KIMBERLY NEVALA: So I will refrain, then, from asking you for opinions on duty of care and why it might be getting this heightened sense of pushback from companies - or even in the House. I could probably opine on that as well, but I will not. That being said, you said this will create some new responsibilities if that duty of care requirement is maintained. How does that change organizations' practices and processes?
VAISHNAVI J: The first piece that's really going to be impacted is expertise. These are some pretty severe harms we're talking about, and most companies do not have in-house, or even contracted, people with expertise in addiction, or mental health, or depression. Particularly not small and medium-sized companies, which form the bulk of the ecosystem. So I think there's going to be a real move towards making sure that they have that kind of expertise in place.
And then the other piece that they're really going to need to think about is, well, how do they establish duty of care? How do they establish safety-by-design in their processes? There are a number of ways to do it. We've been working with some companies to help them understand how some of their internal processes might need to adapt and evolve.
And we find that in some cases with some departments, actually some of the changes are pretty minimal. In particular because a lot of these responsibilities are already outlined by a number of European regulations. So if you've already got a presence in Europe, or if you've been thinking about how to serve your customers there, you already know what you need to do.
In other cases - in particular, we find, with product development road maps - there's going to be a real need to incorporate that safety thinking into the road mapping process. Road mapping season just wrapped up for most companies, although a couple are still trickling in. And we spent a lot of time over these last two months helping companies understand how exactly to think about their product road maps for 2025. So those are the places, I think, where we're going to see a lot of change and where they're going to need a lot of help.
KIMBERLY NEVALA: And if I understood that correctly, what you're talking about there is that, rather than somewhere down the line - and very often this is relatively far down the line, when we're doing legal or compliance checks, or maybe at the point where we're trying to define requirements, or even after, when we're trying to verify that the product or service operates in a manner that we're comfortable with - in some cases, really, we need to be considering up front what the requirements are and how this might actually change the profile of the products and services that we design. Is that correct?
VAISHNAVI J: Absolutely. Yeah, so at the point of deciding, what am I going to -- I mean, road mapping is essentially asking, what am I building for the next quarter or for the next year? At that point, understanding what the regulatory, but also what the user and civil society expectations are of you, what advertiser expectations are of you. And then looking at what you want to build for the next year and how that could be impacted by some of these changes, I think, is really important.
Safety-by-design essentially says that you need to have the consideration of safety baked into every bit of your process. And sometimes companies interpret that as being further down the line. But road mapping is a part of the product development process, so it needs to be incorporated there as well. And it can be really tough to do if you don't have any youth product expertise. Most product managers are not learning about child safety, or suicide and self-injury, or mental health issues in their coursework. And that's where we come in.
KIMBERLY NEVALA: Does that also, as we move forward - I know I'm going to get the name wrong because I think it's changed a few times - in the UK, they have the age-appropriate design code, which sets out, or was intended to set out, some guidelines and standards so folks don't just have to make it up. So I'm interested in two things.
One is how does the general profile of KOSPA compare to other global regulations in the same space? And then is this also exposing a gap in the sort of supportive infrastructure, if you will, around guidelines and standards that would allow organizations, particularly the smaller organizations - although I think it applies to all - to be able to then execute against this with some degree of confidence?
VAISHNAVI J: So I think the age-appropriate design code - which is now, I believe, the children's code - and KOSPA share a lot of similarities. I think that's been the model for a lot of legislation in the United States, including the California version, which I believe is still stalled in the courts. And so I do think they share a lot of similarities.
The supportive infrastructure piece is a really important one, and it's one of the reasons why I started Vyanams. I previously used to lead youth policy at Meta and Instagram, and before that did similar work at Twitter and Google on child safety and youth. And those are big companies which absolutely had the resources to think about interpreting what these regulatory standards mean in practice. Most companies don't have that, and I think that's where we come in.
And so it's been really interesting to me. I mean, we're about 11 months old, so we're pretty new. But it's been very interesting to me the kind of panic that we sometimes hear in people's voices when they're talking to us and trying to figure out what to do. Because ultimately, most of these companies are very committed to doing the right thing. They want to do the right thing. They're not trying to do something that's intentionally bad or intentionally unsafe for children.
But the regulatory changes are happening at such a quick pace, and user expectations are moving beyond those at such a quick pace, that they just don't know how to react, and they don't know where to begin. So like with the children's code, or with a lot of DCS requirements, or even with the online safety acts that have passed in many countries, we get a lot of questions from PMs about, so what do you want me to build? Just tell me. Tell me what you want me to build, and I will build it. [CHUCKLING] And we're like, OK, fair. Let's actually sit down with you and build it. Let's actually figure out what this means in practice. And so that's been really interesting to see.
KIMBERLY NEVALA: Some of that fear and panic is, yes, probably a liability issue. But I imagine it also goes back to something you said early on, which is that a lot of companies don't consider younger cohorts to be their key target customers or those they serve - which they almost have to do by definition, for better or worse, in the digital environment.
And coming back to that issue: it almost doesn't matter if that's your target cohort. They are going to be a cohort that may intercept your product or engage with it. So just building them into those demographic profiles as well is probably just a basic starting point. But very, very different, I would imagine, for most companies today.
VAISHNAVI J: I think it's a big mindset shift for companies today that are realizing that accounting for the youth experience is now table stakes - really important even to enter or exist in the market. If you think about it, that's a very different state from how we've allowed online experiences to operate for the last 25 years.
So I have a lot of sympathy and a lot of genuine support for companies that, until now, had been operating fine for the last 10 years and suddenly, in the last three or four years, have been like, wait, I need to build for children also? A, I don't want to know that there are children, or I don't know that there are children. And B, no, this was not meant for that - this product isn't meant for them. And so I do feel a lot of empathy for companies that are in this position.
KIMBERLY NEVALA: So with all of that being said, perhaps we can wrap up with some thoughts from you on, if there's one or two overarching misperceptions, misconceptions, or just erroneous assumptions that are out there relative to this space, what would those be? And then what mindset or working hypothesis should companies be replacing that - or individuals be replacing that - with to be able to address and thrive in this emerging world?
VAISHNAVI J: I think the biggest misconception is that safety-by-design or age-appropriate design is going to be a blocker to growth and innovation. The opposite is actually true.
Good, innovative products that are safe and age-appropriate are actually an incredible driver of growth. That's something that I strongly believe in, and I think the market has significant evidence to support it over the years. It's a misperception that we really need to just eliminate from our minds, because it's simply not true. You can absolutely build safe, age-appropriate products for children and have them be an engine of growth.
And I'll wrap up with an anecdote from a conversation I had maybe four months ago with a product manager who came to me with a new product that he was looking to build, and said, well, I mean, I'd like to build this, but I suppose I have to figure out how to keep the kids off of it. And I was looking at the product. And I did all the research. And we looked at how the product was working. And we went back and said, no, actually, it's fine for children to use this. You need to make two or three tweaks to the product, but it is actually perfectly acceptable for children to use this product. There's no harm. You're good.
I think we assume as a default that, oh, we're going to have to keep kids off, that it's just not possible. But we absolutely do not have to do that. We can absolutely have great experiences that are age-appropriate, innovative, drive growth, all that good stuff. They are not in opposition to one another.
KIMBERLY NEVALA: And so you actually have a bigger spectrum of opportunity if you are accounting for more ages and stages, as they say in the educational field. Well, this has been fascinating. I really appreciate all of the insights. And maybe we can have you back again in a little while and see how this is all working out in practice. We will also provide links in the show notes to some of those really important and helpful resources that you have provided. So thank you so much for sharing your time and thoughts with us today.
VAISHNAVI J: Thank you so much. It was really great to be here.
[MUSIC PLAYING]
KIMBERLY NEVALA: Well, to continue learning from thinkers, and doers, and advocates like Vaishnavi, please subscribe to Pondering AI now. We're available on all your favorite podcatchers and also on YouTube.