Welcome to The Agentic Insider. Join host Phillip Swan each week to explore cutting-edge ideas and trends in AI and data, as well as hear from other industry thought leaders in the AI space. This podcast is brought to you by Iridius.AI, committed to safe and responsible AI innovation.
TAI - Tess Posner
===
Speaker 2: [00:00:00] Welcome back to the Agentic Insider. I'm your host, Phillip Swan. In this show, we explore cutting-edge ideas and trends in AI and data, as well as hear from other industry thought leaders in the AI space. This podcast is brought to you by Iridius.AI, committed to safe and responsible AI innovation. Let's dig in.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Welcome to this week's edition of the Agentic Insider. Joining me and Alistair today is somebody who has worked on resources for people who are really looking to learn more about technology and the AI industry, and on helping people learn. She's a multi-award-winning entrepreneur focused on AI education for everybody, and she's the leader of her own self-titled
[00:01:00] consulting firm, and she's also a music artist who writes songs about empowerment. She's the founder and interim CEO of AI4ALL. Tess Posner, welcome to the show.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Thank you so much for having me. It's great to be here.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: So, Tess, like we ask everybody else, what future are you solving for?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, I mean, I think about this question a lot, um, because my work deals with young people, so it's all about the future: the future that they're inheriting from us and the future that they're going to create. So for me personally, and for our work at AI4ALL, the future that we're trying to solve for is one where
the benefits of a technology like AI are widely shared, and it lives up to its potential to actually solve some of the problems facing humanity today. I think that's the promise of AI: it amplifies our ability to solve problems as humans. So my [00:02:00] hope is that we can steer it in a way that it can do that for more people, and that those benefits aren't just concentrated among the few.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: I think that's fantastic. And I mean, you are the interim CEO, but you're also one of the founders of AI4ALL. So why did you start AI4ALL back then? What was the push that put you in that direction? Why did you want to take this on so strongly?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, thanks for asking. So actually, um, the organization was founded by Dr. Fei-Fei Li, Olga Russakovsky, and Rick Sommer. They were all at Stanford at the time. Fei-Fei was running Stanford's AI Lab, Olga was there with her, and Rick was also running several programs, and they all had this history together.
And I think Fei-Fei and Olga, being premier researchers in the field of AI, really saw this opportunity for the technology to be something important for [00:03:00] humanity and for the future, and at the same time they saw how the opportunities to get into AI were not widely available to everyone. So on their own, as kind of a side project, they started a summer camp for girls at Stanford, and that pilot was so successful that they decided to start AI4ALL.
And that's when I met them and came on as the founding CEO. I was just coming out of some work, um, being done with the Obama administration on TechHire, where we were helping all different communities across the US tap into the opportunities in the tech sector, and seeing again how technology can be this incredible opportunity for upward mobility and economic mobility.
But often it's very concentrated. So I think that same ethos is where AI4ALL grew out of: that very personal passion and experience of [00:04:00] Olga and Fei-Fei and Rick. And then, when I first joined, it was really about taking that idea, taking the spark and the magic of it, and expanding it out to other communities, other partners, other universities.
Fast forward eight years later, um, we've been doing this work for a while, and it's been absolutely amazing to see what our students do in the field. The ideas and talent they're already contributing to the AI field are just super inspiring. So yeah, it's been an amazing journey.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah, I mean, you're really incubating and amplifying talent here, I think, which is fantastic.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: I love that way of putting it. Exactly, yes.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: So as you're doing that, working with youth, ethics and, uh, transparency absolutely have a critical role in everything that you do, right? I know you've been working in AI ethics since before it was [00:05:00] mainstream, right? So how have you seen the conversation around AI ethics mature,
and what critical issues do you think we're still missing?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, that's a great question. It's interesting, you know, AI is not new, though everyone feels like it is because of the explosion
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yep.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: of the, uh, LLMs. But at the time, in 2017, I don't think we were calling it responsible AI; I think we were more calling it AI ethics.
Responsibility came a little bit later as a buzzword. Um, and at the time there was a lot of talk about bias, transparency, regulation, and governance. How are governments going to think about this? How should companies think about it? Is this something that governments should control [00:06:00] through policy?
And if so, how quickly can that happen? Because policy is often a lagging, responsive indicator, right? Especially when you think about the US in particular, a lot of this innovation is shooting ahead of where policy is able to keep up. Um, so there was a lot of debate about the role of private industry in safety and security, and then where government should fit in.
And there were lots of debates about that, comparing the US to other countries, like in Europe and in China, where this is thought about very differently. Um, and I think there was a good conversation about bias, because AI was starting at that time to be embedded within different systems: deciding who gets a job, deciding who gets through at the airport, immigration [00:07:00] systems, policing systems.
We were starting to see this sense of, okay, if bias is baked into AI, and we're outsourcing decision-making about these really important things, then some people might be adversely affected, and AI systems could amplify existing societal biases, especially for certain marginalized groups.
So I think the conversation back then was, um, kind of ahead of its time, I guess, in the sense that people in the know in the industry were already talking about a lot of this, doing great work and thinking about it. But then the explosion, really when ChatGPT launched, I think it was December 2022, kicked off this whole new era
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: And the conversation has evolved.
Like you're, you're asking, I think, much more attention is being paid. To this because everybody [00:08:00] is experiencing it. Like I was just talking to my parents who are actively using chat GBT, and they are asking health questions, right? And it's like, I don't, how do we help people ensure that the answers you know are correct?
I mean, that's just a very basic thing. And so I think the fact that it's touching all of us and that we are using it, you know, however many hundreds of millions of users something like ChatGPT has, everybody's asking it different questions. Whether they're using it for therapy, for health, for navigating a job application process, um, just loneliness, you know, whatever it is, these things are touching all of us.
And I feel like that's really important, because now everyone's asking these questions and seeing: is it accurate? Is it right? How do I trust it? So I think what's happened is the evolution of these questions [00:09:00] has gone from the experts talking about it to everyone being curious and kind of wondering about where this is all going. Is it good?
Is it scary? Is it bad? Um, of course, like there's a lot of hype too.
So being able to tell what's real and what's just the buzz is challenging.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: It is.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Um, I do think one thing that I find very interesting is the mental health aspects. So you've probably seen those articles recently about people actually being hospitalized for ChatGPT-induced mania, I think they were calling it.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yep.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: So I feel that, you know, to your point about what's being missed, I don't really know if we have clear answers or clear ways of dealing with the human impact of these systems and the unintended consequences. It's like, well, we want it to be really useful, but if someone's having a manic [00:10:00] episode, "useful" is driving them further into that manic episode.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Yeah. What I'm curious about is the students themselves. What insights have they had that surprised you the most with respect to AI ethics? I'd love to hear more about that.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, the students are so much of my inspiration every day. Um, so our program, just to share a little more detail: right now we're running an AI accelerator program for college students, and they come from all over, 150 colleges and universities across the US. So a very diverse group from all over.
And we basically teach them hardcore machine learning and coding skills so that they can actually start building and working on AI projects. The second part of the program is that they work on an AI project that they're passionate about. And I think this is where it's really exciting, because [00:11:00] you already see students bringing such different ideas and perspectives from their own lives.
Um, some of the projects are things like predicting Alzheimer's, predicting credit card fraud and how to better track that, or looking at whether startups are going to be successful in the future and building an app to predict that. Certainly that would be a valuable tool. Um, lots of things in environment and climate are of high interest to students.
There are some projects in music and creativity, and so we have hundreds of these projects actually running right now where students are working with mentors in the field, and we really help them get this view of the responsibility and ethics piece. So we say: this is a really powerful tool for solving problems.
And they get to actually do that in a [00:12:00] hands-on way, but also keeping in mind: okay, where are you getting the data set? How do you know if it's a good data set, whether there's bias in it, and then what are the implications of this tool? Where might it have unintended consequences? So they're engaging with these ideas and they're building their critical thinking skills to not just
work on the tool, but also think through those implications. I find that really inspiring because, you know, a lot of times young people are not part of the conversation about AI, and they actually have so many brilliant insights to bring to it, just about what we should be thinking about and where this is going.
What are they worried about? There's a lot more that I could say there, but I think what I see every day is just how much power there is in including more voices in AI, and [00:13:00] seeing that if they have the tools and the resources and the support, there's so much that they can unlock with those skills, you know?
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Absolutely. I mean, both Phillip and I, and I'll speak for him here, feel that the future's very positive. One of the responsible AI principles that we have at Iridius is around human-AI collaboration. We want this to be a way of standing on each other's shoulders, a way of amplifying and augmenting rather than replacing.
And then there are people coming into the workplace and seeing all of the companies now, you know, using AI as an excuse to shed large numbers of jobs, especially in the tech sector, with Microsoft and others. We both, Phillip and I, feel very positive about this,
and I know you do as well in terms of where the future is going. So how do you see the future for people coming into the workplace, and the future of [00:14:00] work, from an AI perspective?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, it's a great question. Um, we actually just held a salon in San Francisco, our first Future of AI salon series, and we got together some of the largest tech companies and other folks who are really passionate and thinking about this. And I think it's interesting, because on the one hand you're seeing a lot of articles come out saying, oh, AI is going to take 50% of jobs, and there are these wild predictions on all sides.
And other people are saying, well, actually no, it's going to be a much slower transformation, this is all over-hyped. I think the truth is nobody really knows; the data and the kind of predictive analytics of this moment are very challenging, because our models are changing and we don't really know
how it's going to unfold. But I've heard from some people in the space that they're rethinking [00:15:00] work itself from the ground up, and I think that's such an interesting idea, because you're not just automating a piece of the workflow, you're rethinking the whole thing. And whenever you're making a big change like that, it can be a big opportunity.
It can be, like you said, humans in the loop: how do we redesign this in a human-centered way so that AI can support what the humans are doing and amplify our unique contributions, but also the productivity, in ways where maybe we don't have to do the rote tasks that we don't want to do.
Or it can create these efficiencies that make our jobs easier and hopefully better. Um, but I think it will be a challenging transition, just because change is always hard and different people will be [00:16:00] affected in different ways. So what I care a lot about is how we are thoughtful about the transition and who gets left behind
as these transformations of work are happening. I think there's interesting work being done on how we support society, how we support young people, how we support those who might be displaced in this transition, while at the same time we're innovating and rethinking all of these pieces. So I agree with you. I'm optimistic, and I think the future is ours to shape, and we have to be thoughtful about who is most impacted and how we can support them through this transition.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah, I think that impact is really important. We always say that AI has to have an impact assessment: whenever you build a solution or system of some sort, you need to understand who the people are who are going to be affected by it, marginalized [00:17:00] groups,
society, and so on, to ensure that they can be included during the build of a solution right from the beginning, all the way through to the end and the ongoing operation. And I think it's the same with AI as a whole as it moves into the workplace. There are a whole host of people, to your point about what AI4ALL is doing and who you are working with right now, who have fantastic ideas and who don't have to be constrained by the way that things have been done for decades, or even, you know, especially with COVID and everyone moving out of offices so much.
A lot of things couldn't have been envisioned until suddenly they were envisioned, and in so many ways it's the same, I think, with AI, where you have a lot of opportunity to rewrite the rules in a way that benefits society and benefits all.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Well said. I think that's the opportunity in front of us, and great change always brings great opportunity. And, like you said, the participatory approach [00:18:00] is so important, and I love that idea of a framework. I think that's often missing from the conversation, because people are racing to create
things as quickly as possible and often not thinking about, well, what are the implications of this, and then having to go back on it later.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah. One thing I sort of want to drill into here as well: I know that you personally are into writing, producing, and performing a sort of cinematic dark pop. And I also know that AI4ALL has a focus on creativity. And especially as we're going into the idea of music, with the augmentation that's now possible, which has moved into the creation of
still images and now the creation of moving images and movies, there's the ability to use this as a tool for creativity while also being concerned about, hang on a minute, whose copyright or whose rights are being protected as you go through all [00:19:00] of this. And I know that you put on accelerator and other programs and courses to help people think through these things.
Um, so I guess from the people that you're working with, from the generation that you're working with, how are they seeing this? Are they seeing this as a way of being able to do things they could never do before, where yes, there are barriers, but they can be worked through? Or are they concerned about it?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, that's a great question. Um, I just went to a summit about AI, and one of the questions that we discussed was tracing back different technological innovations and what we can learn from that for AI. And it was really interesting, because this organizer had included young people and sort of all generations.
So it was multi-generational. And I think that, [00:20:00] you know, a lot of the folks in the older generations were very skeptical, where it was like: look at social media, we all had this hope that it was going to connect everyone, and then it ended up having these really negative consequences. And I think there's wisdom in what we learn from that. But a lot of the young people were way more positive about AI and the opportunities and the creativity. There's this sentiment of a lot of fear, fear of replacement, amongst older generations, of, oh, I don't like AI art because it's
potentially replacing this whole swath of people and jobs and all those things. But I think young people just have fresher perspectives. My sense is, although you can't really make a broad [00:21:00] generalization, so there are lots of different perspectives, it's interesting to get to hear from young people who are really feeling the possibilities of AI and saying: why are we limiting this based on past experience?
Let's think about this differently. And I find that really inspiring, because sometimes wisdom is important, but sometimes experience limits you, because you think, oh, it's going to be like it was before, you know? And young people always have that way of breaking us out of what's come in the past and saying, this can be...
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Absolutely.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: That's one thing we've learned the hard way here in building Iridius: old norms don't exist, right? We've literally had to unlearn a lot of stuff, and what's ironic is that stuff we learned in the eighties and nineties is very relevant to AI today, which brings us the benefit of our experience in bringing that in.[00:22:00]
I wanted to steer the conversation a little bit towards bias, right, and AI systems, because you have spoken extensively about bias and AI development. So with all the rapid deployment of these large language models, and all the diversity issues that you've warned about,
how are they manifesting themselves in real-world AI applications, from your perspective? And the other side to that is: what examples of AI bias have concerned you the most, and how do we build better guardrails?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, it's a great question, and I think we've seen a lot of different examples of this cropping up, unfortunately. It can be as simple as searching for images of a doctor or a lawyer and getting predominantly male images coming back, for example, and searching for a [00:23:00] nurse or a cook and getting primarily female images.
We've seen examples of gender bias coming through these systems, and a lot of that has to do with the data the models are trained on, right? Because there is that kind of gender bias in society, where people have associated certain professions with certain genders. And if that's the data that's being fed in, you know, there's that whole saying: garbage in, garbage out.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Oh yeah.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: And there are other examples. I mean, one that really concerns me is around hiring. There have been examples of algorithms that have preferred, you know, Caucasian candidates over candidates of color.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Amazon's was a good example of that, I think, in so many ways. If you train it on data showing that you've been hiring, you know, older white guys, then, who knew, it was rejecting all of the diverse candidates and women [00:24:00] and so on, because they didn't match the profile you had trained it on, before they eventually threw it out.
So, I mean, there are so many examples of this. I completely agree.
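To make the kind of screening bias described above concrete, here is a minimal, hypothetical audit sketch in Python: it compares a model's selection rates across groups using the common four-fifths (disparate impact) rule of thumb. The group labels, numbers, and threshold are illustrative only and are not drawn from Amazon's system or AI4ALL's programs.

```python
# Hypothetical audit sketch: compare a screening model's selection rates
# across groups and flag large gaps (the "four-fifths" rule of thumb).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> {group: selection rate}."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold * the best group's rate."""
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Illustrative data only: (group label, whether the model advanced the candidate).
decisions = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 35 + [("group_b", False)] * 65

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.6, 'group_b': 0.35}
print(disparate_impact_flags(rates))  # {'group_b': 0.58} -> below the 0.8 rule of thumb
```

A check like this would not fix a biased model on its own, but it is the kind of representation question about the data set and the selection criteria that the conversation turns to next.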
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, and again it comes back to the humans-in-the-loop question, because we're outsourcing decision-making. If you're saying, hey, AI algorithm, tell me who I should hire, there are so many different steps that the AI is going through where bias could creep in: how is it selecting, and what criteria are you giving it to select?
How is it being trained to do that? Is it being trained, just like you said, on your internal data set? There are all sorts of pieces like that. I think also on the healthcare side of things, if you're looking at the results of certain tests or trials and you've only trained it on a certain group of people [00:25:00] and you're not looking at minority groups or different geographies, thinking about representation in the data set is so, so important in this.
Um, and also just being mindful of, I think you said it earlier, human-AI collaboration. How do we want to actually work with the AI? Are we just comfortable offloading a lot of the decision-making? Where is it that things can go wrong, and how should we be mindful about that design process?
And I think it's easy to just say, well, if the AI can do it, let's just outsource it in that way, and it's unclear to me. You know, it requires a lot of deep thinking and critical thinking about how to design, like we said, reinventing work itself. So you look at the whole process of work and you see:
where is the AI going to help? Where is the AI going to hurt? [00:26:00] And then, to the name of your podcast, agentic AI is a whole other set of questions, because then you're really outsourcing a whole process and not just a piece of it. Um, so I'm curious to hear what you all think about that and what you're seeing, because that's a whole other set of questions that it brings up.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: There are. I mean, I think people get confused between agents and agentic, and they're very much two different things. And there's a big trend at the moment to try to use the LLMs or the foundation models to create a kitchen-sink agent that does everything.
So now this one not only slices bread, it slices bread but also does the laundry, drives to Costco, takes the kids to all their different things, and suddenly it's doing everything. And I think, coming back to the safety side of it and the ethics, the more [00:27:00] things
you put onto it, the more chances there are that something will go wrong, because the attack surface and the mistakes become exponentially greater the more things you try to get it to do. And, especially, you can never completely know exactly how a foundation model, because of its very design, came to an answer.
In that respect, transparency and openness are fine and you can disclose how things are happening, but explainability becomes difficult because you just don't have it, and there's no real chain of thought that will get you there, because it's going to fake what a chain of thought looks like.
And so now you're into interpretability. So I think for agents, the more you add into them, the more difficult life becomes. Trying to find one Swiss Army knife that's a therapist while also being able to be, you know, a gourmet chef, it becomes a little bit more [00:28:00] difficult when you try to have one size fits all.
I think with agentic systems we're in a very different situation, because we're now talking about creating almost a holistic system that can work to solve problems in a variety of ways. And that does not require a kitchen-sink approach. It requires a multi-agent system fabric that allows you to spread the workload across a diverse group of different workers that can all
be intelligent. They can all learn, they can grow, and you can focus them on individualized tasks. So your surface area is broader, but it is also much more individualized, so you can focus on where things go wrong. And then the orchestration, the coordination of those, when you're into
10 million agents trying to do something, at that point the complexities really start to grow, and you [00:29:00] end up trying to understand how on earth to have these things work together, share entitlements, collaborate. All of those are very difficult, different things, because the way that you and your team work at AI4ALL is different from the way that Phillip and I would work at Iridius, and different from the way people at GM would work, or people at Disney.
And so the way that agentic systems show up to help people at home would be very different from how they show up inside a corporation. And then it needs to be able to understand: well, this corporation has a different whistleblower policy than that corporation. This one has a procurement policy that's different.
This one uses Slack, this one uses Teams. So even with all of the nuances of how an employee works in an organization, and the terms of service that they have to abide by, at that point you're into: well, do we need HR? Are we having agentic systems and agents as employees? Do they need a manager?
Do they need goals? Do they need performance reviews? You know, there's a [00:30:00] whole host of different things to go through here that I think amp up the complexity significantly. I don't know how you see this.
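As a minimal, hypothetical sketch of the multi-agent "fabric" idea described above, the Python below shows one orchestrator routing narrow tasks to specialized workers instead of a single kitchen-sink agent, and escalating anything it doesn't recognize to a human. The agent names and routing rules are illustrative assumptions, not Iridius's or anyone else's actual system.

```python
# Hypothetical sketch of a multi-agent fabric: specialized agents handle narrow
# tasks, and an orchestrator routes work and escalates anything it can't place.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    kind: str       # e.g. "finance", "health"
    payload: str

class Orchestrator:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[Task], str]] = {}

    def register(self, kind: str, agent: Callable[[Task], str]) -> None:
        """Each agent stays focused on one kind of task, keeping its surface small."""
        self._agents[kind] = agent

    def dispatch(self, task: Task) -> str:
        agent = self._agents.get(task.kind)
        if agent is None:
            # Unknown work is surfaced to a human rather than guessed at.
            return f"escalate to human: no agent registered for '{task.kind}'"
        return agent(task)

# Illustrative stand-ins for model-backed workers.
def finance_agent(task: Task) -> str:
    return f"[finance] reviewed: {task.payload}"

def health_agent(task: Task) -> str:
    return f"[health] logged for clinician review: {task.payload}"

orchestrator = Orchestrator()
orchestrator.register("finance", finance_agent)
orchestrator.register("health", health_agent)

print(orchestrator.dispatch(Task("health", "blood pressure 120/80")))
print(orchestrator.dispatch(Task("legal", "review this contract")))  # escalates
```

Even in a toy version like this, the coordination questions raised above, who an agent reports to, what it is entitled to touch, and when it must hand off, are design decisions rather than model behavior.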
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Well, that's really interesting. I mean, what I'm seeing is this: when we think about the workforce and its transformation, as we were talking about, a lot of the conversation is, well, now we need to train people to oversee AI systems. That becomes the new skillset, where, like you're saying, as it gets more complex, that's actually a very challenging set of problems, because you have to understand enough about how these systems work.
It is this kind of manager role, you know, and I think about that when I use ChatGPT. Some of the things I have to do with it are similar to managing a team, where you're like, okay, I guess I didn't give you specific enough instructions, and so I'm not happy with what you returned to me.
And so now I have to think about, well, I wasn't clear enough in what I shared with you [00:31:00] and in the feedback process. So using an AI and making it work and be very useful for you, the skills to do that are in some ways transferable from leadership and management skills. And then, as you're mapping out this more complex system of agentic AI, when you have multiple agents doing specific tasks, that becomes more of, I guess, executive thinking, right?
Where you're having to think broadly about the system as a whole and then manage risks: how do you put what into what place? What is your tracking system? So it's really interesting that these are going to be the skills that people need to succeed. Even though the AI is doing most of the work, there's still a lot of work to be done to ensure that it gets done well [00:32:00] and safely,
and that there is this kind of oversight.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Now I'm just thinking: you talk about workforce development, but it all boils down to education, right? And AI literacy. From stuff that I've read about you, you have argued on several occasions that AI literacy is becoming essential, right?
Something as basic as computer skills, right? So what does AI literacy look like, in your opinion, for somebody who will never build a model but needs to work alongside these AI solutions to augment themselves?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, it's a great question. I mean, I think it definitely goes beyond prompt engineering, which is part of it, right? Because if you're going to make it useful, you have to understand how to ask the right questions. And asking good questions is actually a very important skill that not a lot of people have.
And I'm not sure that our education systems really train people to ask [00:33:00] good questions anyway. So that's for sure part of it. Um, and then I think it's understanding a bit of what we were saying before about the explainability, the transparency. You don't need to have an in-depth technical understanding of these things, but you need to understand enough about how it works so that you understand what's coming out,
because otherwise you can't think critically about it. I mean, we used the example before of health data, right? You need to know that it's giving you the most likely set of answers based on all of the data it's been fed, that it can hallucinate, it can be incorrect, and it's not a replacement for a doctor. I think that's incredibly critical, maybe more so than, well, you have to go to driver's [00:34:00] ed to get your license, right? A car is a pretty powerful thing that you can kill people with if you're not careful. With AI, we could argue about where it is right now versus with robotics and where it's going in the future, but you need to know enough to not make errors that are potentially harmful to you and others.
Um, I think that's different from other systems where it's like, well, I turn on the TV, I can switch the channels, I don't really need to know how it works, I can just use it with a button. But AI is different because of what it's actually enabling us to do, and how we're using it, and what for. So I think AI literacy has to go a bit deeper than just how to use it, and more into how it works and how it might affect you. And you have to be equipped with that.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: So back to education for a second. I was on the board of a school, [00:35:00] our kids are adults now, but I was on the board, and one thing the head of school once said was that Jefferson, if he walked into a hospital today, would be amazed at the development and progress that's been made since he was around.
But if he walked into a schoolroom, he would be utterly disappointed that it's basically still the same as it was back then. So as you look at K-12 education and beyond, what needs to evolve to prepare students for this AI-integrated world? In your opinion, what needs to change?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: I think, you know, a lot of people are racing to figure out how AI can transform education. There are sort of two pieces to it. There's AI education: how can people become informed users and consumers of AI? And then there's also AI in education: how are students learning with AI?
And I think those are [00:36:00] two really important pieces that we're very much in the process of figuring out. On the first one, I think it goes back to what we were just talking about, the literacy, giving students a chance to understand how these systems are built. And, you know, for example, computer science is still only taught at 60% of schools in the US, which is just wild.
That means 40%, almost half of the schools, are not even getting exposure to this in high school. And that means it's very difficult: you can do it, but it's much easier if you already have that foundational computer science and math knowledge to be able to study computer science in college and end up going into these fields.
So if you're not learning that and not exposed to that in high school, you're already behind by the time you get to college. And that means we're losing out on so many kids who might bring a lot to this [00:37:00] field, and getting that knowledge is so important to their own lives.
So I think we need to just leapfrog that and teach AI at every single school. There's no question in my mind that AI and computer science are essential skills. It's the literacy piece, but it's also giving a bit more of that depth and rethinking what the foundational skills for the future are.
And I think it comes back to this critical thinking and problem-solving and questioning as well. In addition to using the technology and having the technical skills, we are entering an age of transformation, an age of change. We don't know exactly which jobs are going to come through. But we know that young people need to know how to ask the right questions, how to solve problems, how to be flexible and adaptable and teach themselves things, and how to navigate such a fast-changing [00:38:00] landscape of careers, of jobs, of technology.
And that would be my hope for how education can be transformed: less about rote knowledge and memorization and skills, and more about these deeper life skills that I think are going to be more essential than, you know, the very specialized career technical education of the past.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah, I couldn't agree more. And I think the more rote learning that's going on, the less people are going to have the skills that are needed going forward. I always talk about this as: you have to be able to guide and have an interactive conversation
with some of these models to be able to get what you need out, which means you've got to have the intellectual rigor of that sort of thought process, to be able to guide it and coach it and seek what you're looking for out of it. But at the same time, and you were saying the same thing here, I always talk about [00:39:00] discernment.
What it gives back is great, but you have to be able to look at it critically. So there are a lot of situations right now where a lecturer, professor, or schoolteacher will assign a 2,000-word essay on something, and somebody asks ChatGPT for a 2,000-word essay.
It comes back, and you just paste it straight in. There was no thought process in any of this, so you weren't really learning anything. But then the question is, well, hang on a minute, what if what it came back with was completely inappropriate? What if you were asking for a summary of Romeo and Juliet and it decided to go into, you know, Shakespeare in Love
as an analogy for this? So it wasn't the right answer; it decided to go and pick a film and analyze that. And if you weren't even reading it, or weren't able to discern whether what was coming back was right... I think that's that skill of discernment. And then, to your point earlier, it's a bit like having an intern who comes back all keen and has come up with something, and then you look at it and go, well, that's not exactly what I was [00:40:00] looking for, which means I probably didn't give the right
nurturing, the right guidance, the right coaching. So how can I help you be better, to do better? And I think those sorts of executive functional skills, almost like becoming a CEO of your own group of agents, are going to become super important for people in life going forward. Whether it's medical, monitoring blood pressure and having it come back to you, or whether it's just looking at finances and having your own
little agentic system to go off and see how your finances and your investments are doing, you've got to be able to discern what's being done, not just outsource it and forget it and hope that it's coming back correctly.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, well said. And it's exactly right: we're all going to become CEOs of our own lives, and how well you can manage that is either going to accelerate you in amazing ways, or you'll be left behind because you just won't know how to [00:41:00] use these systems at all. And I think that's where school comes in too. Can you imagine going to college and high school when literally all of human knowledge is right in your pocket? It's a totally different age, where all the information is out there. So asking somebody to spit back that information is no longer the task it was when you had to go to the library and figure it out yourself.
So then the question is, well, how do you actually build those skills of discernment, like you were saying, and critical thinking and high-level executive functions? I think it comes down to experiences in the classroom: using discussions, using oral presentations, using project-based learning. These are proven strategies that do build those skills, and they're available at almost any age.
You know, obviously you scaffold them up over time. Um, [00:42:00] but I went to a college where we just had discussions and oral presentations, and we had to present math proofs at the board, and it was absolutely amazing.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: But, um, when you talk about this being AI for all, "all" encompasses all, and there are so many schools out there that are not even teaching computer science, even in America. And then you go way beyond that, globally, and there are so many schools
that don't have the scaffolding framework to be able to provide those skills. And I think that's where it becomes quite important for organizations like yours to provide that additional support and help people develop those skills that maybe were not provided, even at some of the better universities worldwide at the moment, while those institutions go through this transition themselves.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, exactly. There's going to be a need for more, faster. Education systems are slow to change because they're often [00:43:00] complex and highly localized, especially high schools, right? In the US at least. And colleges too; they're massive institutions. It's not a tomorrow thing.
They're going through this transformation themselves and figuring it out. So we're going to need these alternative systems, like AI4ALL and other similar programs that are out there, to help fill in the gaps. And I think companies also have a responsibility to reskill their own employees and think about that.
And I'm sure that they will, because it's in their interest, to their benefit, to do so. But we have to think bigger about this problem, because the transformation is happening now, and kids need to be prepared now, and adults as well. It's not just for young people. I also think about workers who have been in their jobs for a long time: how are they going to get skilled up in this way if their [00:44:00] job isn't providing it?
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Exactly.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: So as we wrap up here, thank you so much for the time that you've spent with us. It's really been an in-depth conversation, and we love it. But we'd like to finish off with some rapid-fire questions. So, quickly: what one AI application are you most excited about, Tess?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Ooh, most excited about? Um, I think I'm really excited about AI in healthcare. I think there's so much potential there. And having gone through some health challenges personally in my family this last year, I see the potential of revolutionizing care effectiveness and bringing greater speed to medical and drug research breakthroughs.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yep.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: And you are very much into music. You are a music artist. What song or artist is currently inspiring your own music?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Ooh, good question. [00:45:00] Um, well, I'm a big fan of Florence and the Machine
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Oh my gosh. Me too.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Yeah. Yeah,
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Yeah. Love Florence and the Machine. Absolutely.
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Well, did you see? She just teased some new music, so I'm excited about that.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Oh, I didn't see that. Okay. All right. That's fantastic. And then finally, what do you like to do for fun?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Oh, wow. Well, play music. That's my most fun thing, actually. Um, also mountain biking. I live north of San Francisco in the forest, so mountain biking, hiking. But yeah, music is kind of my obsession, so every chance I get, I'm either playing or creating.
riverside_alistair_lowe-norris_raw-video-cfr_the_agentic insider_0035: Great. That's great. We share a passion for hiking, then. That's fantastic. And Tess, how do people reach you if they would like to reach you? What's the best way?
riverside_tess_posner_raw-video-cfr_the_agentic insider_0034: Yeah, so they can go to our website, ai-4-all.org. They can find us on social media at AI4ALL, or connect with me [00:46:00] personally on social media, on LinkedIn. I'd love to connect, and reach out if you want to learn more about our work or hire a young person from one of our programs. Or maybe someone who's listening may want to take one of our programs, our AI education programs.
So please reach out to us. We'd love to connect.
riverside_phillip_swan_raw-video-cfr_the_agentic insider_0033: Tess Posner, thank you so much for being such an amazing guest. To our audience, thank you very much for joining us. I'm looking forward to seeing you next week. Thank you.
Speaker: That's a wrap on this week's episode of the Agentic Insider. Thank you for listening. For show notes and more, please visit the Agentic Insider show. To learn more about Iridius, visit Iridius.ai. We'll see you next time.