Robot Unicorn

Is AI a helpful tool or a 'Terminator' situation waiting to happen? In this episode, Jess finally gets to interview Scott as they tackle the complex world of AI safety for kids. They break down the different types of AI your child interacts with daily and reveal the hidden psychological risks. This episode will help you feel more empowered and less anxious about guiding your children through the new digital landscape. 

Get 10% OFF parenting courses and kids' printable activities at Nurtured First using the code ROBOTUNICORN.

We’d love to hear from you! Have questions you want us to answer on Robot Unicorn? Send us an email: podcast@robotunicorn.net. 

Credits:
Editing by The Pod Cabin 
Artwork by Wallflower Studio 
Production by Nurtured First 


Head to nurturedfirst.com/bodysafety to learn more about our Body Safety & Consent course!

Creators and Guests

JV
Host
Jess VanderWier
Co-Founder and CEO of Nurtured First
SV
Host
Scott VanderWier
Co-Founder and COO of Nurtured First

What is Robot Unicorn?

Join me, Jess VanderWier, a registered psychotherapist, mom of three, and founder of Nurtured First, along with my husband Scott, as we dive deep into the stories of our friends, favourite celebrities, and influential figures.

In each episode, we skip the small talk and dive into vulnerable and honest conversations about topics like cycle breaking, trauma, race, mental health, parenting, sex, religion, postpartum, healing, and loss.

We are glad you are here.

PS: The name Robot Unicorn comes from our daughter. When we asked her what we should name the podcast, she confidently came up with this name because she loves robots, and she loves unicorns, so why not? There was something about the playfulness of the name, the confidence in her voice, and the fact that it represents that you can love two things at once that just felt right.

Welcome to Robot Unicorn, hosted by my parents, Jess and Scott.

I hope you enjoyed the episode.

Are you ready for this, Jessica?

I actually am not ready for this topic, to be honest.

I want to learn about this topic.

Okay, so today's discussion is on AI, specifically AI safety for kids.

I just want to preface this whole discussion: we are talking about AI today, but it's very new.

Honestly, I feel like a lot of this information may go out of date very quickly.

So if you're listening to this episode, like, a year from when it's released, some of it will probably stay quite similar, like some of the AI we'll talk about today, but in other areas we're just in the infancy stages of this technology.

It's kind of hard because even a lot of the research I was looking into is pretty new, right?

So this is going to be less of a technical discussion, I think, and more of us explaining what the different types are and then having a conversation about how we keep our kids safe when they're using it.

Yeah.

I was just having this

visualization if we continue doing this podcast for let's say 10 years and 10 years from now we revisit this episode how different things will be even then compared to now.

Like AI is so in its infancy and we're already experiencing the impact it's having, good and bad, for children and for us.

So just imagine ten years from now having this conversation, like it'll just be so different.

That's why I was smiling.

I was like, wow.

And I will say, legitimately, some of the top podcasts out there actually use AI: they just write up a script and generate it in the host's voice, and you can't tell the difference.

Like, one of the audiobooks I'm listening to right now is at least in some part narrated with AI voices, and honestly, you can't tell the difference.

Yeah.

It's wild.

Quite crazy.

Yeah.

To start us off, I think many parents who don't have a technical background of any kind will hear "AI" and picture something completely different from what it actually is.

So I was just wondering: when you think of the word AI, what do you visualize in your mind?

I mean I feel like I'm a lot more well-versed in AI now than I used to be.

AI, artificial intelligence.

Nice.

Right?

I think I used to think of AI as, I'm not even going to get this right, R2D2?

I mean, that would be Star Wars, yeah.

A Star Wars robot. This robot that would be telling us what to do and everything.

I've always been terrified of AI from the day I learned what it was till today.

It just scares me.

It still does.

I feel a lot of fear when I hear about AI, as a human in the world who worries that AI is just gonna take over.

So you're thinking of like Terminator.

Yeah, like Terminator situation.

I think of like

I, Robot, where all the robots take over.

Two great movies, by the way.

I know, two of your favorite movies.

Two of my favorite movies.

I have a lot of fear around AI.

I have a lot of fear around my likeness being used against me, because I am a public person and it would be incredibly easy to do.

Yeah.

So I worry about

videos circulating using my likeness that are not actually me or what I believe in.

I worry about the future for therapists, because a lot of people are using ChatGPT as a way to do therapy, and it's often very incorrect.

And so why do you worry for therapists?

I worry about people that are using AI for therapy, like, for the field, that people are getting wrong information. They're using AI that is not a replacement for what therapy actually is.

I worry about our kids not knowing if what they're viewing is real or not, and about their likeness being used against them, like in terms of bullying. I think that's gonna become really prevalent.

I see that as a theme that's already happening.

Yep.

It's only gonna get worse.

And even as an author, as a writer, you know, at what point are people gonna stop making art because it's easier to just turn to AI and ask it to make art for you? For people who are creatives, what is that gonna mean?

How can we use it in a way that's gonna help us?

So like I think I have a good understanding of what AI is.

I think what I struggle with is fear, and that definitely includes fear for our children as well, even in terms of intelligence.

Like, are our kids going to learn the same way?

Or are they just going to use AI to write all their papers for them and never actually have to do any learning?

Will teachers be able to come up with curriculums that incorporate AI, because they're going to have to, that still help our kids learn and develop essential skills? Or are we just gonna rely on this for everything, and that's how it's gonna take over the world?

So that's where I'm coming from.

Yeah.

You're thinking of a lot of negatives when it comes to it.

It scares me, so I thought I'd name that, because I feel like you have different opinions.

I think there are both very positive and very negative possibilities when it comes to AI.

Do you think it's gonna take over the world?

To be quite honest, I think it already has, but not in the way you are thinking

It's not like Terminator where it's taking over.

I don't see that happening, because it's far more subtle than that, I think, in terms of the information presented to us.

Yes.

And I think that's already happened.

And that's one of the forms of AI we'll talk about.

But let's say on Instagram, or on YouTube, or even your news app, whatever it is, you're being presented with content that you are the most likely to consume. Not necessarily like, but consume. It might actually be stuff that you despise, but you're more likely to rage-read it, more likely to consume it.

So that's already happening.

And I think that controls a lot of

the way people think.

But to be honest, I think a lot of people think ChatGPT is the only AI we really experience, that it's kind of distant from us, but it's been used everywhere.

That's just one type: generative AI.

So, this is a terrible explanation, because this is not really how it works, but if you were to simplify it down as much as you possibly could: it's essentially autocorrect, but to the billionth degree.

You put in a prompt, which is just a series of characters, and it takes those characters and determines, based on all of the information presented to it, the most likely sequence of characters that makes sense. So it will provide you with whatever its output is.

It's essentially just autocorrect, but far more advanced than auto-correcting a single word.
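For anyone who wants to see that "advanced autocorrect" idea in motion, here is a toy sketch in Python. To be clear, this is not how real models work (they learn statistics over billions of documents, not the ten made-up words below), but the loop of "pick the statistically likeliest next word, append it, repeat" is the same basic shape:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus; a real model sees billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(prompt_word, length=4):
    """Repeatedly append the most likely next word, like autocorrect on a loop."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no known continuation, stop
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the"))  # "the cat sat on the"
```

Real systems also work over "tokens" rather than whole words and consider far more context than just the previous word, which is what makes their output coherent across whole paragraphs instead of five words.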

But that's only one type of AI. It's becoming more prevalent, but it's not the most common one we encounter.

Anyway, I just wanted to do that exercise to see how you think of it, because I think a lot of people think of I, Robot or Terminator or something like that.

And I think we're still quite a ways off.

That's good.

And I feel like I also tried to name concerns that I feel, but that I know a lot of people in my shoes are feeling, people who don't have a tech background, who aren't well versed in technology.

People think I'm good at technology because I have an online business.

I am the worst at technology.

I don't understand it.

I'm terrible at it.

That's very clear.

You understand Instagram pretty well.

That's one.


So for someone like me, I understand the impacts of, let's say, screens on kids, which we've talked about a lot. But AI feels scarier, because you've just seen I, Robot and The Terminator and movies and shows that paint it as this terrible, scary thing that's gonna take over the world, right?

So of course I come in with fear.

Yeah, and the reality is I think we're years away from robots being able to actually accomplish those fine-motor-skill type tasks, because that's a far more complicated problem to solve than generating a bunch of text or a deepfake video for you. Those kinds of things are far easier for computers.

Yeah.

So I think there's a lot of marketing hype around certain robots that kind of look like what you see in I, Robot, but I wouldn't believe all of it. I still think we're quite a few years away from them being something we'd even find relatively useful, where the cost is outweighed by the productivity.

Okay

Well, there you have it.

Okay.

Anyways, that's more my personal thing.

The reality, though, is that AI has already been woven into our daily lives, and potentially our kids' daily lives, in ways that we might not even notice. Like I was saying, from social media and TikTok feeds to Alexa or Siri answering homework questions.

So do you think it's crucial for parents to look beyond all this hype and get a real handle on what AI actually is and does in their child's world?

Absolutely.

No, I was actually thrilled for this discussion, because for once I'm the one asking Scott some questions, the engineer, who's interested in these things and I think has a good understanding of technology. I was so excited to finally be able to ask you things and push back on you.

But yes, I think we need to understand how it impacts our kids, when most of us, including myself, don't even understand how it impacts us. And then you have a child, and kids are using it in schools all the time.

Like I'm hearing about it all the time, right?

Not only with the bullying, the deepfake videos and stuff like that, where it's so hard to know what's real and what's not, but even for homework assignments.

So how are teachers gonna help our kids use AI to their benefit, and how are kids going to learn when they can just, you know... It's like when we were kids, they said, "You're not always gonna have a calculator on you." Now we do, right? But we still had to learn math, because we didn't think we'd have a calculator on us all the time.

we're gonna have a calculator on us all the time.

But now it's like you literally have access to the world on you.

And you have access to this machine where you can say, "Write a five-page paper for me on this."

So how are we gonna work with our kids on this?

Mm-hmm.

And it's gonna keep changing.

Yeah.

That is I think the most challenging aspect and why I've been kind of wanting to hold off on this discussion.

Because it changes so much that it's hard to keep up with all of the use cases for AI.

Like it can be used in a lot of different situations.

Kids will find ways to use it wherever they can.

Yeah.

Because they're good at using tools and figuring out how to, let's say, not have to do as much homework. Because I would have done that.

Yeah, hundred percent.

I would've done that.

If I could have, I would have.

And the research highlighted this, although I can't remember the dates, and it already seems kind of outdated, but it was saying something like around 70% of teens are using generative AI, like ChatGPT, but also video generation, music generation, all that kind of stuff. But it said only about 37% of parents are even aware of it.

Mm-hmm.

Makes sense.

I mean, that to me felt already out of date. Who doesn't know that ChatGPT even exists?

Oh, they weren't even aware it existed?

Yeah, basically not even aware of any of the possibilities when it comes to generative AI.

So how old is that research?

Yeah, I'd have to look back at what I found. Either way, it still shows that there's a significant gap. Whatever that gap is, I think it should be the other way around, where parents know more.

But again, I think I'm one of the only people I know of my age who grew up with a computer, I don't remember not having one in the house and being able to use it, where no one else I know actually had that.

But our kids have all grown up with incredible technology that almost seems human, even.

Yeah.

These personal assistants like Siri and Alexa almost seem human, and they're integrated right into kids' daily lives from the day they were born. So I think there's potentially this gap of parents just being older and not having grown up with it, versus kids who have.

That doesn't mean we shouldn't learn about it.

Yeah, totally.

Or you shouldn't learn about it.

But how would you even do that if you don't know anything about it?

Search on ChatGPT: "Teach me about AI."

Yeah, exactly.

So what I was finding is that there's a fascinating distinction between the different types of AI, which, let's say, I've known about, but maybe a lot of other people don't.

It's not all the same.

Like, we have generative AI, such as ChatGPT, or Veo from Google, which does videos, and there's music generators and all that kind of stuff.

There's recommendation algorithms on YouTube or Instagram or other social media platforms or Google.

There's even AI companions like Replika, which are essentially trying to be characters that become friends, or something like that, with the person chatting with them.

So do you think you could break down why it's so important for parents to know the differences between them? And, for example, what would the psychological risks be of a recommendation algorithm on YouTube, let's say, versus a voice assistant like Siri?

So let's talk about the recommendation algorithm.

Like I do think the algorithm's important to understand and teach your kids about.

Like I'll give you an example.

I was on Facebook on my computer when

Facebook first came out, right?

Mm-hmm.

And I remember following like I think it was like Huffington Post or something like that.

And they would put out a lot of news articles and I started reading them

Right.

I'd never seen this kind of news article before.

It was all new to me at the time.

And then more news comes up and then more news.

But it's all kind of like a similar type of news.

And I was probably 18 or 19, right?

And that's all I'd see on my feed, and I didn't understand that the algorithm was just feeding me, already back then, what I was reading and wanting to see.

So I just started to think, wow, everyone must feel this way.

Like this is how the world is now.

Things have changed.

Like there's a lot of feminist stuff and things like that, right?

And I loved reading it.

And I was like, man, everyone must feel this way.

Like everyone must be really feminist and everyone must like have these points of view.

Cause look, like so many people are talking about it.

And like I

didn't understand that the algorithm was just feeding me more of this content.

I just thought everybody feels this way until you start talking to people and you're like, oh, they actually don't feel this way.

You know what I mean?

I think the greatest example of that is honestly

during elections.

Elections, yeah, exactly.

Right, at least you saw it in the US, and even here in Canada: the elections are significantly closer than what you would expect based on what you see on YouTube or Instagram or Reddit, whatever it is.

Yeah, because whatever your political leaning is, like you start to watch those videos and then that's all you see.

Totally.

I remember saying that to you before, like, "Everybody feels this way, that's all I see online," and you're like, "No, it's all you see because that's what you're watching, right?" And you start to watch someone from a different party and all of a sudden that's all you see.

And then you're like, wow, everyone's voting for this person.

So I think helping kids understand that is important. I'm specifically thinking about boys and girls, in terms of the content that's out there for boys that can be really harmful, you know, the manosphere, we've talked about that before: "how to be a man," but it's really toxic.

Or like for girls, like body image.

I was doing a deep dive recently on body image for girls, the types of content teens see, and I got a lot of pro-anorexia content coming up on my feed. And if I didn't know that I was specifically trying to train my algorithm so I could see what type of content teen girls are seeing, I would think that's what everyone believes.

Yep.

So I think that's where the algorithm's important to explain to your child.

This isn't necessarily reflective of everyone's views, but once you watch one video like this, your whole algorithm is going to start to change. It helps kids realize that not everyone feels this way, right?

I mean, that's an oversimplification. One video doesn't necessarily change it, for several reasons. The system needs to see patterns, it needs to see that you've engaged with it, and it needs to make sense. Essentially, these feeds and recommendation systems are trying to guess what is gonna keep you active on there for as long as possible.

So it's a bit of an oversimplification to say one video will change it. It will have an effect, but the weight becomes heavier over time as well.
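A rough sketch of that "the weight becomes heavier" idea (the numbers and logic here are entirely made up, this is no platform's actual algorithm): imagine the feed keeps a running score per topic, where every video you engage with adds to that topic's weight, and the highest-weighted topics fill your feed.

```python
from collections import defaultdict

# Hypothetical engagement log: (topic, fraction of the video watched).
history = [
    ("cooking", 0.2),
    ("politics", 0.9),   # watched almost all of it
    ("politics", 1.0),   # watched another one fully; the weight keeps growing
    ("sports", 0.1),
]

# Each watch adds its engagement to the topic's running score.
scores = defaultdict(float)
for topic, watched in history:
    scores[topic] += watched

# The feed then favors the viewer's highest-scoring topics.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['politics', 'cooking', 'sports']
```

One cooking video barely moves the score, but a pattern of fully watched politics videos dominates the ranking, which is the "one video won't flip it, but engagement compounds" dynamic described above.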

But just for kids or teens to know that just because this is your entire feed, that doesn't mean this is what everybody thinks, I think that's important.

The social companions, actually, I wish I'd looked into this a little bit more, but Common Sense Media just came out with some research on AI social companions, basically saying what we all think: it doesn't replicate a human friendship and can be quite dangerous for teens.

I think we really should be aware of that, because do you know how many social companion companies have tried to get me to sell their products?

Like people who develop these like the companies?

The companies.

People who develop these social companion AIs for teens and tweens.

I get them in my DMs every day, being like, "Hey, can we set you up with whatever it is? We'd love to introduce your tween to so-and-so."

I'm sure they would.

Right?

And I'm like, no, get out of here.

No matter how much money you'd pay me to promote this, it would never happen.

But we do need to be aware of that too.

I'm gonna skip ahead to a question related to that, and then we'll go back, because the research actually

takes a very strong stance on AI companions, especially for teenagers.

And we'll get back into the concept of anthropomorphism, where you treat an inanimate object as if it's actually human or has human-like qualities. But some experts are saying that no one under the age of 18 should use them at all, due to the risks of emotional dependency and receiving harmful advice.

This might be a tough sell for teens who feel lonely and have found comfort in a chatbot.

So how can parents navigate this conversation, validating their teen's feelings of loneliness while holding a firm, clear boundary on these specific apps?

Yeah, I'm not gonna lie, like when I hear that research, I wanna like shake the table and like flip it.

It's like, do we really need something else to replicate what we need to do in the parent-child relationship? It's so anti-everything: attachment, child development, everything related to that, to get your child's need for belonging met by a chatbot.

That's not real.

You can't make eye contact.

You can't have a warm hug.

You can't...

Yeah, well, let's say in the future they can do that. So you have a nice Terminator.

Right.

One that looks at you, looks human, talks to you like they're human, and gives you time and attention.

It will never replace

the role that a parent needs to take in their children's life.

It irks me to my core because it's like, we would rather be like, my child feels lonely and they're having a hard time.

"Let's set them up with a chatbot where they can talk about their feelings," instead of, "Let's understand why my child is feeling lonely, try to connect with them, and figure out how to get their need for attachment met."

Like I have a feeling it's not parents that are

telling their children to do this though.

No, totally.

But kids are finding it.

They're finding it and they're starting to chat with it.

Yeah.

And it seems real, honestly.

Even talking to ChatGPT sometimes, or Gemini or Grok or whatever you like, you're like...

No, for me it's usually annoyance that it's not following instructions when I've asked it to summarize something or whatever.

No, I totally get it.

Like I feel like teens just find this on their own.

But again, why are they seeking it out?

Right.

And when we find it, are we just being like, oh, it's really nice actually.

Like they're able to talk to this

chat bot and they feel really seen and heard by them.

Like, "I'm so glad that they have an outlet." To me, if my child is talking to an AI and telling me, "Oh, I feel so seen by them, they really get me," I'd have alarm bells ringing.

Yeah, again, I have a feeling that is not what's gonna happen, though. I have a feeling that parents are not going to feel that way.

No, I know.

I'm just saying this is how I would feel.

That's how I want parents to feel.

But I don't think that's what's gonna happen.

A lot of parents will maybe have no idea.

And again, this whole conversation is basically about how we're at the infancy of a lot of AI systems.

Even honestly the oldest systems are not that old.

No.

No, it's so new.

Largely they've been created within our lifetime, right?

Which is new in the grand scheme of human history.

But this is where, back to your question, should we talk to our kids about these things?

Yes, we definitely should

'Cause it's not like we're necessarily gonna set them up with these AI chatbots, but maybe they hear from a friend at school who's been chatting with one and really finds it helpful, so then they're like, "Hey, I wanna try that out," and they try it out.

So my question to you though is if you did not have me, how would you know about any of these things?

And how would you educate yourself on any of these things so that you could actually talk to your child about it?

Because that also means you have to be following, let's say, news outlets.

And when it comes to the different types of AI systems that exist, you almost have to follow tech news.

So you might be shocked by this, but I actually do follow some tech news.

And I am shocked by that actually.

Yeah, and I actually have taken a special interest to looking into AI.

And I've actually been following a lot of the news that's been going on with AI and AI chatbots, because in my work I see this, I see the bullying that can happen.

So I think as a therapist, and if you are a therapist who works with kids, like you do have to kind of be in the know on these things.

Like I don't think it's enough to be like, it's all bad, you know.

I'm trying to be in the know.

Am I perfect at it?

Absolutely not.

Like I'm trying to learn a new language.

It actually really gives me compassion for the parents listening to this podcast who are hearing me talk about parenting so easily. It must be similar to how you feel when you're hearing me talk about all this stuff, like you're learning a new language, right?

And I feel like with the tech and with AI specifically, that's what it is.

But I am trying to be in the know.

Sometimes I'll use ChatGPT just to kind of see what it has to say about something, and fact-check it. A lot of times it's wrong.

Even in terms of being a creator, a creative online, I see people using ChatGPT to write posts constantly.

I don't do that.

I write all my own posts, which I feel proud of.

My tagline for myself is AI could never.

And I try to constantly outdo what AI could ever do.

I always joke with Jess that I could use ChatGPT to write something better.

And then she says, "AI could never." She always says that.

That's why you don't see my posts be like "five tips for whatever," because I'm like, you could just search that. I constantly challenge myself because I want to create something for folks that they can't just get with AI. But sometimes I'll ask AI, "Huh, make this," or whatever. It could never.

One might argue that you're just not prompting generative AI well enough.

Yeah.

Self-fulfilling prophecy.

Yeah, exactly.

Bias.

Bias.

That's confirmation bias if I ever heard it.

So then the other thing that I try and think of for myself

is yes, I'm very negative about it all, but also there's a lot of positives.

And so I'm trying to think along the lines of like where would those boundaries be for our kids?

Something like a chatbot just goes against everything I believe in, so I don't like that, and the research backs it up.

But for something like using it for a school project, my thought is: okay, similar to this podcast, where I'm sure you use some form of AI to help you pull together your research, right? Or similar with the book: if I'm gonna use AI to help me find research, then it had better challenge me to do a better job than if I didn't have it.

And so I think that in school, like if kids are gonna be using AI for homework assignments or whatever, then we should up the standard in which we expect them to come up with a paper or write something or whatever.

I just think the standards have to change, or our expectations have to change.

Again, I don't know what you think about that, but we should be teaching kids how to incorporate it in a healthy

way that's gonna actually help them in the future.

Because if you just say no AI to your kids, that's also not helping them because there's no way this is not a part of all of our future, right?

Yeah, I mean, this goes back to, was that last year? No, two years ago. We went to a conference and were talking about AI in terms of design.

Yeah.

And this speaker said something that really stuck with me.

Just before we were in high school, or grade school, I can't remember, what did we do? Trigonometry. Anyways.

I blocked that out, to be honest. I wish I had AI for that.

Just before that time, you would have to look through a table to find the answer for sine, cosine, tangent.

And then by the time we got to school, we had the TI-83 calculators, right? You could even graph on them. But before that, you had to manually look through a table to find the answer you were looking for.

And I think AI is similar in

that it's a tool that will be used.

I think it's far more complicated to integrate, because it can do so many different things. And when it comes to knowledge work, it's not necessarily replacing knowledge work. It's going to make it so that we have to do far more to stand out from anyone else.

That's what I'm trying to say, right?

Like use it as a tool.

But I think we're at such a new stage of this that I honestly feel kind of bad for the kids that are in high school right now or in university, because...

They haven't caught up.

Yeah, the education system, in Canada at least, I don't think has caught up with where we are.

No, exactly.

I have so much empathy for the kids

let's say in university and high school right now, because, A, they had to go through the pandemic, so for a lot of their school years they weren't in school, they were at home, whatever, during the important social years. And now they have AI. I just feel like they've had a lot, and they were probably the first generation that had their own phones and iPads, all of that, earliest on, without all the conversations that we're having.

I feel so bad for that generation.

Like there's a lot of good that comes out of it, but we are very hard on that generation.

And I think we have to think about like how much they've been through.

It's a lot.

So I really have a lot of empathy for them.

Also I think that makes sense.

I mean, we've talked about it many times, and I've said specifically regarding AI that I don't think it's terrible. It's just that you have to be interested enough in figuring it out outside of your school environment to actually use it properly.

And still be someone who can think critically and use it.

Thinking critically outside of it is the key, I think.

Yeah.

Right?

Like if you just use it for everything.

I mean at this point that does not work.

Maybe at some point in the future it will.

But one of my favorite use cases for it is a meeting transcription summarizer I created for our team.

So we'll sit in a meeting, I'll turn on voice memos on my phone, and once it's done I send the transcript to my own AI software that summarizes it specifically for our team so we can look back at it.

And it's in a Google Doc for us to refer back to at any time and create tasks and all that.

That way we don't have to focus on furiously writing notes during the meeting and can actually pay attention and have a good conversation.
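Scott's actual tool isn't shown here, but as a loose illustration of the idea: take a raw transcript in, pull out the lines worth keeping, and hand back a structured summary. Everything below is hypothetical (the keyword lists and example transcript are made up), and a real version would send the text to an AI model rather than match keywords:

```python
# Toy transcript summarizer: flags lines that look like decisions or
# action items. A real version would use an AI model, not keyword matching.
DECISION_WORDS = ("decided", "agreed")
ACTION_WORDS = ("will", "todo", "action:", "needs to")

def summarize(transcript: str) -> dict:
    decisions, actions = [], []
    for line in transcript.splitlines():
        line = line.strip()
        lower = line.lower()
        if any(word in lower for word in DECISION_WORDS):
            decisions.append(line)
        elif any(word in lower for word in ACTION_WORDS):
            actions.append(line)
    return {"decisions": decisions, "action_items": actions}

meeting = """\
We agreed to launch the course next month.
Jess will draft the announcement email.
The weather was nice today.
"""
summary = summarize(meeting)
print(summary["decisions"])     # the "agreed" line
print(summary["action_items"])  # the "will draft" line
```

The point of the structure, toy or not, is the same as in the episode: the summary lands somewhere shared (here a dict, in Scott's case a Google Doc) so the team can refer back to it instead of taking notes live.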

So I think there are definitely ways that you can use it, but then there are other ways where it just

It takes more time to use it than it actually will benefit you anyway.

Yeah, I think if we can use it to our benefit, for example, even pulling together research for this podcast episode.

Right.

It's not like you just pull it together and then we just read it off.

We still have conversations.

You still are critical about what you're reading.

We're not just taking everything it says.

That's why it takes you so long to come up with this.

But if I had to search for all that myself... like, this one found slightly fewer, but

in general, our episodes pull in between 30 and 50 peer-reviewed articles from scientific publications somewhat related to the topic.

Yeah.

If I had to find that all myself, it would take me days.

Well that's the thing.

It's like, now that's where I see the benefit.

Because then we can have this conversation using research.

Yeah.

But we're still talking about it, and we're not just like,

here, let's just get Jess's voice to read the script that you came up with of all this information, right?

Like but we can have such a

robust conversation because you're able to get all of that.

In university when I had to do research like that, that would take me like three days to come up with all that research.

And like my degree was very research heavy.

That would have sped up the process so much for me.

It has, I will say, sped up the process probably more than ten times for me in preparing for these episodes lately.

Just because

Also, I know now what to ask it in order to get my list of peer-reviewed articles.

And then I can go through and I can actually read them.

Well at least read the abstract to see, okay, is this one related?

Okay, no, this one's not.

This one is, this one's not, and then I can like weed them out way quicker that way.

Yeah, so critical thinking still has to be done.

And we up the expectations we have of ourselves.

So I definitely don't think it's all bad.

I just think we have to keep the critical thinking, keep the creativity, keep these other things.

We have to teach our kids to keep those things and not just rely on AI, assuming it's the be-all and end-all with all the answers in the world.

Yeah, I think that makes sense

Hey friends, so at pickup last week, our daughter asked Scott a truly kind of tricky question in front of her younger siblings.

Scott was telling me that when he heard a question like this, he used to panic, but this time he had a plan and he said

To our daughter, thank you for asking.

Let's talk tonight when we've got privacy.

And that's a line that he learned straight from our new Body Safety & Consent course at Nurtured First.

So this new Body Safety & Consent course is taught by me, Jess.

If you listen to this podcast, you know me.

I'm a child therapist and a mom of three, and I have taught body safety and consent education for years.

This course takes all my years of experience teaching this education and gives you calm, age-appropriate language for body parts, consent, and boundaries.

You'll learn how to teach your kids that no means no, you'll learn how to teach them to read facial cues, you'll talk about safe and unsafe touch, and you'll even teach them about their uh-oh feeling.

There's guidance inside this course for the real life stuff like tickling that goes too far and even the difference between a secret and a surprise.

We made this course at Nurtured First because research shows that body safety education helps kids speak

up sooner, and we want that for our family, for Scott and me, but also for you.

So check the course out at nurturedfirst.com/bodysafety, and to save 10% use the code ROBOTUNICORN.

And just full disclosure here, we are the creators of this course and we're so proud of it.

Okay, let's move back a little bit here.

A core idea in the research that I found.

is that most AI isn't built for kids.

Which I mean, that's just in general.

You don't even have to look at the research to know that.

A lot of these software systems are built for, at a minimum, someone 13 years old

if they have their parents' consent, but it's typically meant for adults.

Okay, good to know.

Well, that's just how all software in general is supposed to be, for child privacy reasons.

But it's built to be a sophisticated guessing engine.

Like I described, generative AI is guessing what text you want or what video footage you want or

what music you want, right?

Or an Instagram feed is for guessing what kind of content you want to see there.

And it's really, again, optimized for engagement and data collection.

Because that's how they make money.

Yep.

So as a therapist, what does it mean for a child's developing brain to be interacting so closely with a system that's designed to be persuasive and addictive rather than nurturing or educational?

Yeah, I feel like we've talked about this in so many other episodes as well, but it interferes with the development of their brain when we give them a device that's

designed to be addictive.

Right?

We need to give our children's brains time to mature, to develop, to do these things that you could only do in childhood, like be curious, be creative, have time for free, open ended play, have time to like try and learn the world on your own.

And so when we've replaced that time that's really essential for children with a screen, whether that's AI or not, we know that that is not good

for their developing brain.

Right.

So that's just like we know that.

Now if you add in AI and basically access to what feels like all the information in the world at your fingertips where

my fear is, like, you don't have to think anymore, right?

Like you don't have to spend any time like, oh, I wonder.

Like I feel like there is, and you're gonna be so annoyed, wisdom in the wondering.

Like obviously I'm writing my books, so I'm very, no kidding,

into this right now.

But I feel like there is wisdom in just wondering without an answer to something, you know, where kids are like, I wonder why, that classic example, the sky is blue.

And instead of, why is the sky blue, ChatGPT, and then like all the answers, they might spend some time thinking about all the reasons why the sky could be blue, and then we can help them come up with the answer, right?

But if kids never have time to wonder, pause, just reflect on like why they think something could be, and they just instantly have an answer, like I think it disrupts

the ability to think critically, to ask questions, to come up with your own answers.

Like I think it's really important that kids have the time to do that.

And I worry that they're not going to.

Yeah, and I mean we do have that information at our fingertips.

Again, you have to be able to think critically enough to understand: is what you're being presented with even true?

Right, exactly.

If I'm a kid, like when I was a kid and I found something on the internet, I just assumed that was true.

Yeah.

Right?

So like we as parents we have

even more of a job to help our kids understand just because you read something online does not make it the truth.

Yeah.

Honestly one of my favorite use cases for AI right now.

is going through our garden and taking pictures of different things, or let's say a tree in our front yard looks like it has a disease on it.

Mm-hmm.

So I take a picture of a few leaves, a picture of the overall tree, a picture of the bark or the trunk, and then I send it off and say, what kind of tree is this and what are the possible reasons that the leaves look like this in the middle of the summer?

Right.

And I ask it again to provide some links to things that I can read outside of that just to see where it's getting the information from because often it's wrong.

Yeah, it's definitely not right all the time.

What I like using AI for is things I hate to do.

So like formatting.

Like I don't like formatting things.

Like if you're in Google, we have like Gemini on there, right?

And I literally just have to prompt it like

format this and it just formats it for me.

So like it takes away the tedious work that my brain doesn't like to do.

So there are simple things like that where I think it just speeds up the process to make something even better.

Yeah, exactly.

But it should never replace, and again, that's generative AI.

Yeah.

That should never replace critical thinking skills.

Yeah.

I think one of the biggest

safety guards that we need to think of for our kids is helping our children understand that any AI they're interacting with

is not a real human.

It's a machine.

Because it was very interesting.

There's a bunch of research on this and the concept of anthropomorphism, where young children especially will see AI, something like Alexa or Siri, as a real

feeling person because why wouldn't it be?

They can talk to their grandparents on the phone or something like that.

So why would the little speaker that talks back to you not be a real person?

Yeah.

And legitimately, children have a hard time being able to differentiate between human and machine at this point.

And the research was actually suggesting that this can even lead them to trusting AI more than

humans, which is interesting.

It's not surprising to me.

What do you think are the long term social and emotional consequences?

I mean we talked about this when it came to companions.

When I wanted to flip the table upside down.

What are the potential

consequences for a child learning that Siri, who they thought was a real person they were interacting with, is actually a machine without feelings, empathy, or any real understanding?

Well children like just naturally want to believe that the person they're talking to like cares about them, right?

Like I've even heard our kids like, Oh sorry, Siri, uh not that one, you know.

Or even me, like, typing please into ChatGPT.

I'm like, I don't need to say please to you, like you're not a human.

Although at one point, I don't know if it's the same now, but at one point a more polite prompt

sometimes gave better results back for certain LLMs.

Well, see, I am specifically making sure I do not think that this is a person I'm talking to, so I've been taking out all the pleases and sorries, not saying anything like that.

But our children crave belonging, connection, togetherness, right?

So of course they're gonna seek that from any voice that they hear.

So I do think it's really important to make sure that those needs your child has are not being met by technology because

Technology will not be able to truly meet those needs, right?

And it will leave your child craving more. Again, that need, as always, has to be met by us or by other loving

people in their life, like grandparents, aunts and uncles, friends, like not by AI.

I think it just makes sense for us to talk about how, I mean,

this episode is very related to the previous episode on screen use in general.

But research around AI is similar to screen use where it's a good idea to have

unplugged skills and be able to play outside of using any AI for help or using any screen at all.

You need to develop other skills

and play and be able to interact with someone by looking them in the eye and not having to look away constantly because it's too awkward for you.

And so do you have any activities potentially that parents could use to maybe reinforce the idea of

unplugged skills?

I mean, literally anything that just doesn't involve a screen.

I've talked about this already in this episode, but as someone who

is also creative, I feel like in my earliest days all I did was write in my diary, write songs, write poems, write plays, like write, write, write, write.

Like I've been writing for

years and years and I could see how kids maybe wouldn't develop or need those outlets when they could be on a screen or they could ask AI questions or they could prompt it.

Write a play for me on this.

So I think even that, like making sure your child has creative outlets with no screens, that they have enough time to be bored, like there's nothing else to do.

I'm gonna write a poem or I'm gonna make a craft or I'm gonna create like you like to

build things.

I'm gonna build something, right?

Like when do our kids have time to be so bored that they want to do that?

And I want to make sure our kids still have that need, privilege, whatever, gift of

boredom, and that it's not all replaced by, well, I could do this with AI, you know.

There are too many options on there that are gonna give our kids that dopamine hit, and we need to preserve their ability to learn

creativity, critical thinking, curiosity.

Well, not even learn.

They have all those skills.

Just preserve it.

Yeah.

In those early years.

Yeah.

Okay.

Okay, I have an interesting one for you.

Could you potentially walk us through, and maybe you completely disagree with this line of questioning, but can you potentially walk us through

how maybe you as a parent could sit down with, let's say our oldest is now 10, she has to write an essay on something, and

how she could potentially use ChatGPT for a school project in a way.

Yes.

In a way that builds critical thinking instead of encouraging them just to copy and paste the answer.

Oh my gosh

This, yes.

I actually look forward to this day.

So I think let's say they have to write an essay.

Or even just do any project at all.

I would not be encouraging my child to type in, like, do a project for me on

Canada or whatever. I would not be doing that.

I'd be like, okay, so what do we want to research?

Start there.

See if we can get some research articles together

And then I would want to teach my child, and maybe this is because I've done research for my whole career, right?

Like how to critically look at the articles that we're seeing.

So we'd like read everything through instead of just like assuming that these are all correct

and try and get my child's take on what they're reading, you know, like, oh, what does it mean that Canada has this many provinces or whatever it is, right?

After we have all the research, then if she wanted to do another prompt, right, we could be like, give me some ideas of how I could creatively showcase this on a board or whatever it is.

But I would want my child to have a say in the process of what's happening, not just like, hey, write this paper for me or make this project for me.

Like

prompts that maybe encourage like ideas forming but not the finished product.

And I would be very mindful of

allowing my child to even like prompt it to like put the finished product in.

Like, I think, and I know, that's how my brain works.

As soon as I read something and it's like that's how it is, my brain's like, okay, well that's how it is then, you know?

Even when I'm creating on, let's say, social media, I'm never consuming.

Like I don't consume content from other parenting pages, because I don't want that idea of, that's how I should write it, to be even in my head.

So I think you could use it to gather research, but you could help your child think critically about like is this true or not.

You could use it to

maybe give you an array of ideas of like how to do something, but then your child could choose what they want to do.

So I could see it being helpful in that way in terms of like we were saying.

it could cut out some time that you would take researching, but then that time should be spent by, I think, encouraging your child to, okay, well

you just saved three hours on research, so now how are we gonna spend that time making this project different, unique, better?

What kind of creativity can they bring to it?

Like I don't know.

So

That's just me, but you might have a totally different answer.

Yeah, and I don't know if the way I use it is right or wrong, but often I use it to help in the brainstorming process.

So I have like certain prompts that I've

created and saved that will get it to actually prompt me in return and get me to think about a problem that I'm working on

in a different way.

It'll be like, ask me ten questions that will help me think of alternative ways to solve whatever problem X is.

And I actually use that one quite often, just to help me open my mind to other possibilities. When it comes to even, like, a podcast episode, I'll ask it to prompt me so that I can think it through.

So it's still me doing the critical thinking and it's just

it's prompting me with those questions.

It's usually that stage and some brainstorming for whatever project.

And then it's usually the final stage, where I've already kind of completed it and it's 95% of the way there.

And then I ask it, what are some ways that I could improve this thing that I'm working on?

So it's usually like beginning and end, but and again, maybe I'm using it wrong, but

It can't do any of the middle part.

It just doesn't work.

Yeah.

For me.

And I think understanding your child and their brain and what's gonna work for them probably makes sense too, right?

Like for me in a brainstorming stage for like something creative

I need to see nothing.

I need to just be alone with my thoughts and let my thoughts kind of come up with something.

Like that's how my creative brain works.

Your creative brain maybe needs that stimulation of like the prompts, the question asking.

For me, I think I might get stuck if I was asked too many questions.

So... Well, my actual way of doing it is I outline: here's the problem, here's some of my solutions that I've already created.

And then it asks those questions.

So it's usually I try and do it myself first, but I don't like to stop there where it's just me doing it.

I want it to

prompt me to figure out if there's alternative ways to fix a problem or to think about, let's say, even this podcast episode.

So I've already come up with

possible solutions and then it just helps me potentially think of more.
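The "ask me ten questions" approach Scott describes could be saved as a small prompt builder like this. This is a hypothetical sketch; the function name and prompt wording are illustrative assumptions, not his actual saved prompt.

```python
# A hypothetical sketch of the saved brainstorming prompt described above:
# state the problem and your own solutions first, then ask the model to
# question you rather than answer for you. Wording is an illustrative
# assumption, not the actual saved prompt.

def build_brainstorm_prompt(problem: str, my_solutions: list[str],
                            n_questions: int = 10) -> str:
    """Build a prompt that asks the model to prompt you back with questions."""
    solutions = "\n".join(f"- {s}" for s in my_solutions)
    return (
        f"Here is a problem I'm working on: {problem}\n"
        f"Here are the solutions I've already come up with:\n{solutions}\n"
        f"Ask me {n_questions} questions that will help me think of "
        f"alternative ways to solve this problem. Do not answer for me."
    )
```

The design choice here mirrors what Scott says: the human does the thinking first, and the model's job is limited to widening the search with questions.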

Yeah, that makes sense to me.

And then, yeah, research, like having it search through

millions of scientific publications to help me find some that are related is incredibly helpful.

Yeah.

Like and we should use it in ways that can help us.

And if your child never learns that, they will be behind, right?

Because

Inevitably other kids will learn how to use it in helpful ways.

And like it's kind of gonna be... maybe at ten they won't, but I think within their education

They're gonna have to learn how to use AI because it's just it is gonna be a part of life.

So I think we've covered a lot of risks today.

We didn't even cover things like deepfakes, which are videos that

put someone else's face on another person and make it look like you or some political figure is saying something they actually didn't say, and it would be easy for parents listening to feel overwhelmed.

And just wanna ban everything, because truthfully it's complicated and it's integrated into everything we're doing already online.

So

I want our goal to be moving from this idea of protection to more of an empowerment for parents, because I think that's important.

And if a parent could focus on

one or two key things to empower their child to be a safe and resilient digital citizen in this relatively new AI world,

what do you think they should be?

Well, one I just wanted to name what you said, which is helping your child understand that AI is not human and that it is technology.

It's a tool for them, but it's not a friend.

Like I do like just the idea of having that.

And the second is along with that, like seeing AI as a tool.

So it doesn't replace critical thinking, creativity, curiosity, all of these things.

But maybe even as a parent, if you've never used AI, like trying to figure out how

It can be a tool in your own life or just even playing around with it a little bit so you can start to understand what it is.

Again, you are talking about generative AI, so large language models like ChatGPT and stuff like that, but there's a lot of other ones too.

So even just trying to understand it more like Scott said to me, like you have to look at tech news.

You have to like try and be in the know.

But again, in a previous episode, and the one before that, we talked about how you don't always need to be in the know.

I know.

It's hard.

So

That's why I thought this conversation was interesting.

And I don't even think we got through half of the questions.

I personally think we need a part two.

But people can let us know.

Yeah, maybe let us know if you want to hear more because there is a lot more that we could have covered.

Look, I thought I was gonna be bored to death by this episode, but honestly, I was very fascinated by this entire episode.

So thank you.

You're welcome.

I tried to keep it less technical for you.

It's all about the children.

It's about a relationship.

What's your one tip?

I want to hear from you.

I think it's important to remember that let's say we're talking about a large language model or any of those

tools in that situation, it's important to remember that it is a tool, but if you can try and do something without it and come up with something even better, that's only going to benefit

your child more.

So to try and get them to do that first always is I think the best way to handle it.

All right.

Well thank you for listening.

I hope that everyone found this as fascinating as I did

And interesting.

Yeah, I hope so.

And uh I hope people will let us know what they think about AI.

I would really love to hear feedback on this one for sure.

This is gonna be a continuous conversation.

Honestly, it's gonna change.

So we'll be back.

Oh nice.

Right.

Oh my goodness.

That was an amazing reference, right?

Okay.

End it there.

Thanks.

Bye.

Hey friends, thank you so much for listening to today's episode.

We are glad that you are here.

If you enjoyed today's episode and found it interesting, we'd really appreciate it if you'd leave a rating and a review.

Scott and I actually sit down together and read them all.

A five-star rating helps us share our podcast and get these important messages out there.

Thank you so much for listening and we can't wait to talk to you again next time.