Pondering AI

David Ryan Polgar makes the case for value-driven technology, an educational renaissance, passionate disagreements, mindfully architecting our future, confident humility and progress over perfection.

Show Notes

David Ryan Polgar is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and an advisor on improving social media and crafting a better digital future.

In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”.   

Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable.

A transcript of this episode can be found here.

Our next episode features Vincent de Montalivet, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.  

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
David Ryan Polgar
All Tech Is Human

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

[MUSIC PLAYING]

KIMBERLY NEVALA: Welcome to Pondering AI. My name is Kimberly Nevala, and I'm a strategic advisor at SAS. I'm so excited to be hosting our second season in which we talk to a diverse group of thinkers, doers, and advocates all striving to ensure our AI-enabled future puts humans and the environment first.

Today, I'm beyond pleased to bring you David Ryan Polgar. David, as many of you likely know, is the founder of All Tech Is Human and a leader in the quest for human-centric responsible technology. David is also well-known for his work on improving social media and crafting our digital future. Thank you for joining us, David.

DAVID RYAN POLGAR: Oh, Kimberly, I'm excited to be here. I mean, especially when you talk about aligning AI with our human values. I mean, it's one of the most important issues out there. And I think that's why we always-- both of us run into such a broad range of passionate people because this is a really important topic.

KIMBERLY NEVALA: Now, you have a legal background.

DAVID RYAN POLGAR: I do.

KIMBERLY NEVALA: Can you tell us what inspired you to wend your way from the legal arena to founding All Tech Is Human?

DAVID RYAN POLGAR: Yeah, well, I will say it's a hidden secret, right? A little naughty secret that there are actually a lot of attorneys involved in the responsible AI and the responsible tech space. They don't always put that out there. One, because there are a lot of associations, sometimes not always positive, that come with having a legal background. But what I will say is being an attorney really has you steeped in thinking about ethics and responsibilities, specifically business responsibility and its impact on a wide group of people. There's a lot of critical thinking.

So from my own journey-- my personal journey-- I saw early on, probably about 2012, 2013, it really resonated with me that the technology that we are creating, specifically around AI, was altering or is altering the human existence. It affects how we live, love, learn, even die. That's a big deal, full stop.

It alters the human condition. It alters how we get the news, the jobs that we see or don't see, how we interact with one another. It shapes our worldview. So we need to be extremely thoughtful about how we develop and deploy these technologies. And I think it was a real lost opportunity to have such a limited sliver of society be the ones who are developing and deploying these technologies.

If it's impacting you, then you should be involved in the process. I really see the creation of technology-- or just technology at large-- as being intertwined with the democratic process, right? So I jokingly always like to say: no application without representation. That if you are being impacted by technology, then you yourself should have an ability to impact its creation. Because it's affecting you. And if you're affected by something, you really want to have that sense of agency.

So yeah, I was able to really leverage some of my legal background to bring in a greater amount of thoughtfulness. And I will say, where I always saw that as a potential advantage is one of the knocks on the tech industry at large is the classic case that comes from Facebook of move fast and break things. And although they've moved away from that original motto, that obviously comes up in a million thought pieces around variations that we should have.

So that's the major difference and major reason why you need actual people who are going to be more naturally thoughtful with it. Because one of the exercises I always give to people is imagine you have a tech founder and an attorney in the same room, right?

KIMBERLY NEVALA: Oh boy.

DAVID RYAN POLGAR: Let's make it a cocktail party. And you'll notice that at the cocktail party, they'll approach the issues from a different perspective. They have a different disposition. The tech founder is going to be generally overly enthusiastic. Optimistic. Some might argue that's rose-colored glasses, right? But that's also what you need to have as a tech founder. So that's not necessarily a bad thing. Because America has done well, specifically, because of its ability to have such bold entrepreneurs. So that's a good thing that, as an American myself, I think we should always champion.

But what I will say is the attorney on the flip side will say, hey, tech founder, I'm glad you're excited. I'm glad you're seeing all the positive ways that this technology can be used. But have you thought about its unintended consequences and negative externalities and a way that this could go sideways? Are you thinking about bad actors in ways that this could be exploited by authoritarian governments?

So what we've seen in recent years with a lot of media attention around some of the ways that AI has been abused or has gone sideways or could go sideways, it's really a lack of imagination I see. Because any time the tech founder says, well, we couldn't have imagined this, that's not entirely true. What that statement actually means is that we haven't involved people who were pointing out these problems in the process.

So I've always seen with my own work dealing with tech ethics, doing a lot of writing, speaking, consulting, and then founding the nonprofit All Tech Is Human in 2018, is that we need to inject the system with greater diversity of thought, of perspectives, of backgrounds. And I really come back to the phrase that diversity breeds responsible innovation.

In other words, responsible innovation, by its very nature, is only going to happen if we include multiple perspectives that can start pinpointing areas that we're missing out on, areas that we're overlooking. And I think it's also key to point out-- because I think this is oftentimes missed in the conversation-- that we are not trying to create a superhero, right?

KIMBERLY NEVALA: Well, David, we all certainly like a good superhero. But if we're being honest, they are both hard to find and often quite temperamental. So what's a more appropriate mental model?

DAVID RYAN POLGAR: I hate to burst the Marvel universe, but there are no true superheroes. And I don't mean that in a bad way, I actually mean that in a good way. I mean that in a very collective manner: this is about coming together and creating a process where I might try to be well-read and talk to more people and have greater perspectives and be more holistic in my thinking.

But I think I, as an individual, also need to recognize my own limitations. So yes, I do have a varied background. This is my life. I read umpteen hours a day on this material. But that doesn't mean that you would want to say, OK, well, let's just have this one chief ethics officer or let's just have this one individual or let's make this one tech founder more ethically inclined.

That's good. However, the issue is not that we need to have a better one person-- this one person who's going to have so much power. That concentrated power is actually part of the problem and that you would want to diversify this power among multiple people in a larger process. And that process is going to include so many different backgrounds because tech today is not the tech of the past. And what I mean by that is that tech is intertwined with society.

And when we say tech, what we mean is society and politics. We don't mean this is like some type of utility that you're just receiving and we can just leave it up to technologists and we can either buy it or not buy it. That's the wrong framework, in my opinion. We need to think of it more as everything that's being created today is altering our tomorrow. Therefore, we as the general public, in order to make technology align with the public interest, the public needs to be more involved in the very process.

KIMBERLY NEVALA: Now, I've observed that we seem to be veering - and sometimes within the same conversation or in the same day - between a collective sense of excitement about what we can do with these technologies and a collective sense of terror about what might occur as a result of these technologies.

And I'm interested in, where do you see us netting out? And how can we actually start to engage in a better conversation that really balances it out? That there's not-- it's not a dystopian future and it's also not a utopia. As technologists, we have been appropriately accused of having rose-colored glasses. Technology will solve-- yes, there's problems, but we'll solve that with technology. Yeah.

So in the work that you do, how have you seen that conversation evolve and how do we really think about having a better conversation?

DAVID RYAN POLGAR: Well, what we need to realize, in my opinion, is that we face very serious public decisions in the coming years dealing with infrastructure and the environment as we've seen just in recent years, specifically around the environment. Aspirations of going to Mars aren't going to matter if our own planet Earth is not sustainable, let's say, for the next 200 years.

So there are very immediate issues that we need to think about. However, what's always difficult about creating change is that there has to be an ability to change something. And, oftentimes, an individual can feel overwhelmed. So even around the climate discussion, a lot of times people do throw up their hands because they think, well, I'm one individual. What impact do I have? Maybe this is just on its course. Maybe my action doesn't affect something long-term.

Whereas, with the tech conversation, in the immediate future, the biggest thing that needs to change from my perspective is that we need to realize that, well, all tech is human. And what I mean by that is that our tech future is not written. Our tech future is not predetermined. And I point that out because that sometimes is the perception that we have: that we think that our tech future is a runaway train. It's out of our control, right? That the singularity is near. It's going to happen at a certain point. And this is when robots are going to gain consciousness and take over and turn us into pets, right?

KIMBERLY NEVALA: It's a fait accompli, right?

DAVID RYAN POLGAR: It's already done. Then that, in itself, would rob us of our sense of agency. Whereas my whole life and the work that I do with All Tech Is Human is all focused around the idea that it's up to the decisions that we make or don't make. And it's up, right now, to us coming together with multiple stakeholder groups. And that's oftentimes what doesn't happen.

So all these conversations might happen-- you have a lot of tech workers from big tech companies. They're talking about this. And then, you have policymakers. They're getting involved with this. And then, you have academics. They're doing amazing research on it. Talking to college and grad students and law students. They're really interested in this. They want to get involved.

But what needs to actually happen, similar to the political process, is that if you were deciding, hey, do we want to build a new bridge or a high-speed rail or something like that? You wouldn't just talk to the engineers, right? You would talk to the sociologist. You would talk to the urban planner. You would talk to the people who live nearby and would be impacted by it. It's a holistic process.

So technology, right now, it isn't predetermined. It's going to be determined by what we do. So that is the call to action for other people to get involved to say, our tech future is only as good as what we're creating today. The guardrails that we create or don't create. The rules that we decide to make or not make. The engagement that we have or don't have.

And that's what is motivating people to say, all right, I want to be part of the process. The catch right now is there needs to be an ability for people to come together to co-create, to collectively have these discussions. And also to have some type of forward movement.

Because I will say, Kimberly, that I wish it was easier. I wish there was a button that we could push to say, I want to make sure that AI doesn't get out of control. And I can just push the button. And then, magically, everything is fine. I wish there wasn't any algorithmic bias and I just hit a little button.

If only it were-- it was that easy, but it's not. And it's also because we might disagree about what an ideal future looks like, right? So you need to have all of these different voices involved in the process that can debate. That can feel safe about arguing because these issues are worth debating and arguing over and coming up to some level of a consensus.

And really, our tech future is about having the ability to realize that these are complex, nuanced areas that have inherent trade-offs.

KIMBERLY NEVALA: This is such a critical point. There have been several cases recently which highlight the dichotomies you referred to. Will you share a recent example that highlights the thorny nature of these issues?

DAVID RYAN POLGAR: Specifically, if we think about a recent issue dealing with Apple and going after CSAM. It's really problematic content.

KIMBERLY NEVALA: Which, on its surface, why wouldn't you?

DAVID RYAN POLGAR: Why wouldn't you, right? So on its surface, you'd say, well, that's a slam dunk, of course. Go after this. But then you think, OK, well, now what's the trade-off in terms of privacy?

And we noticed for one of the working groups that we had with All Tech Is Human that when we were debating even our social media future-- improving social media-- that was the name of the report-- people widely disagreed and passionately disagreed. And disagreed with data and disagreed with conviction, right? Our tech future is not a homogeneous being, even when you look at just the rise of the web in general. There are massive disagreements about what the web should even be. How it should evolve. How private it should be. How commercial it becomes, right? How anonymous somebody can be or can't be.

So there's so many debates that happen underneath the surface. But what is really needed and what I get motivated about is we need to find an ability, a place, a space to come together and have these debates. Because it's not always easy, especially when you're talking about responsible AI. Now, you need to involve policymakers. Well, the way that we've set up our political bodies in the United States, it's not always set up to have them move quickly. Tech moves quickly. Our policy doesn't move quickly.

KIMBERLY NEVALA: It does not.

DAVID RYAN POLGAR: It does not, right? So how can we think about these issues? How do we incorporate the knowledge base of all these great researchers, all these great people who are doing amazing work in AI-- responsible AI, specifically. How do we incorporate them to have a pathway so they can inform these policymakers? What systems do we create that allow us to actually have a good amount of oversight and engagement so it's not just kind of reactionary so that our policymakers also are kind of like well-situated in the space as well?

So I think it's going to need to be a space where there's almost like a playground of sorts where all these different groups come together. So I know even with All Tech Is Human, we recently had an online responsible tech summit. And I was just blown away by just the diversity. And I mean diversity in all senses of the word that come together.

So you would see the big tech person with the startup person with the policymaker with the student with the artist and the designer and the activist. And they should all be in the same room, similar to how they should be in the same room when we debate anything dealing with policy. Because our tech is our future, is our policy-- they're all one and the same.

KIMBERLY NEVALA: Yeah, and I suppose some of the good news today is that awareness of these issues is at an all-time high. You mentioned the ongoing debate around Apple's approach to CSAM. Can you talk to some of the other seminal moments or events that are really driving and sustaining that widespread awareness and concern today? And then, maybe we can talk about how we make this practical.

DAVID RYAN POLGAR: There's always a few of these watercooler moments.

So a few years ago, you had Cambridge Analytica, which really was such a pivotal watershed moment for greater involvement around understanding the traditional business model. Specifically, with social media platforms. But I think it also brought to light the larger issues around business models that might be geared more towards advertising.

And then, how that sometimes creates certain perverse incentives, right? Who is the end user? Is it the person or is it the advertiser? Who does the tech company-- who are they working for? Recently, I would say that the two movies, Coded Bias and also The Social Dilemma, both on Netflix, really brought forth a lot more conversation.

It always seems to take a good book, a good TED Talk, or a good movie that really brings about these conversations. But I would be remiss if I didn't also mention Black Mirror, right? Because the popularity of Black Mirror also brought a lot of these very complex, thorny issues to a moment where we could have a conversation socially.

So I do think that pop culture does play a very important role in influencing people. Actually, it's been shown that with any level of social change, oftentimes, it's a TV show that actually influences people because it normalizes behavior. It normalizes a conversation. That is really key. So the attention is sky high. And then, just going back to The Social Dilemma and Coded Bias.

But now, where I think we are is that we are saturating the kind of problem stage. So everything goes through, like, a problem, cause, solution cycle. Now, OK, there's a lot of debate over what causes these problems. But I will say there's a hunger-- and this is the space that I kind of deal with-- of how do we actually get our hands around this. How do we move from just pointing our fingers-- and blaming?

Because it's very easy right now. There's a lot of blame. There's a lot of people who are upset. So you can always say, well, I wish the tech company was more socially responsible. And then, you can point your finger and say, well, I wish the general public-- how can we increase digital citizenship and digital literacy and media literacy for the general public?

And then, you could also point your finger and say, well, how can we empower tech workers, right? They're the ones creating it. So maybe they're the last stop. But then, you can also point your finger at the media and say, well, how can we change the narrative? How can they inform the general public?

So we oftentimes ask-- and this happens with so many conversations I'm in where it always leads to the same question. They say, well, David, who's to blame? And I say, we're all responsible, right? Everyone is playing a part. So it's really like systems thinking that there's a lot of interlocking parts when we deal with responsible AI.

And if we're going to successfully move to this solution stage, we're going to have to put aside a certain amount of blame. Obviously, you still want to hold people accountable. But you also need to reach a point where we say, well, we need everyone involved.

KIMBERLY NEVALA: It can certainly be easier said than done, particularly when you adamantly disagree or feel an entity has somehow shirked their responsibility. How do you address this in your work?

DAVID RYAN POLGAR: I can't tell you how often it comes up with All Tech Is Human where somebody might get upset and they'll say, well, how dare you? I mean, this does happen a lot. How dare you include someone from-- pick a large tech company-- on this panel? Let's imagine.

And I always actually have to push back on that to say, we're not certifying anyone. What we're doing is we're creating a space where different ideas can come together. Different perspectives can come together. Because we can't improve things if we're just writing something on Medium, right? And that's good. It's good to be active and write.

But if we're truly going to focus on action, we need to have activists and tech companies, active tech workers, policymakers, media-- so we have a more thorough, nuanced media narrative-- we need all of those parts to come together. And that is the part that is always missing. And it's something that I'm seeking to change. Easier said than done, but we're also seeing a lot of traction as of late.

KIMBERLY NEVALA: Yeah, and if there's one theme that we hear repeatedly, it's certainly about this idea of co-creation. It's about the need for diverse collaboration, ecosystems, and stakeholder groups. You do a lot of work with and on behalf of youth. In fact, I believe you have an upcoming panel on a youth-led response to The Social Dilemma, which you just mentioned.

So I'm wondering if there are some lessons we can learn about productive, appropriate, constructive debate and discussion from and through your work with youth? What has that taught you about how we can better engage diverse stakeholders productively? Especially - or I was going to say even, but really, especially - when their perspectives and their experiences dramatically diverge from our own?

DAVID RYAN POLGAR: That they do. I always like to say that as much as I try to get in the brain of an 18-year-old, this isn't an '80s comedy where we both get struck by lightning on the streets of Manhattan and then we switch bodies. And then, all of a sudden, every trend that an 18-year-old is into, I totally just get because I'm in their body.

We have to be careful that we don't talk for a group. So specifically, if we're speaking about a young generation: Gen Z, if you will. We need to ensure that they're brought into the process. That we can knock down some of our preconceived notions. Because oftentimes, we try to hear what an 18-year-old is saying and then filter it through our own brain and our own experiences, which are going to be different. Because every age is going to be filtered through a level of experience about how saturated they were in this current tech exposure or climate.

And one of the things that always blows me away-- and this is why I'm actually very hopeful about the future-- is when I speak to college students and grad students and a lot of law students as well. We have a lot of those folks in our All Tech Is Human community. And we recently started a Responsible Tech University Masters program as well.

So I'm constantly in touch and talking to these individuals. And there's a sea change that's happening right now. And the sea change-- at least in my takeaways from my conversations with 18, 19, 20, 21-year-olds-- is that they naturally intuit so much more nuance around the thorny kind of tech and society issues than other generations might.

And I think that's because of just their lived experience about how innate this is. In addition, all of these scandals of recent years, these watershed moments, well, the burden is actually going to fall on them. So sometimes, I think they're a little upset because they want to say, well, I want to make sure that I have a good future. I want to make sure that everything is good for me as well.

But I think they're rolling up their proverbial sleeves to say, well then, we need to actually create jobs that don't exist today. We need to make titles that don't exist. We need to have a more hybrid character. And this, Kimberly, I'm seeing is an area that is going to need to move in the next couple of years, but it's also not going to be easy.

And let me unpack that statement. What I'm seeing from talking to a lot of college students today is that they're looking at everything going on with responsible AI and they're saying, OK, well, in order to grapple with this, I need to understand a little bit of psychology and law and sociology and history and also kind of some of the technical aspects. In other words, for the job title of the future, if you're going to be in responsible tech or responsible AI specifically, you need to be an amalgamation of so many different perspectives and different disciplines. You need to be a unicorn, in other words.

So responsible AI needs unicorns. But-- this is the big but-- what I'm learning from a lot of college students and grad students is that our traditional structure for getting a very multidisciplinary, varied experience is not always set up for that. Because our general structure of university settings is still going to be, well, you're choosing a major. Yes, you might be a double major. You might have some exposure. But what I am bullish on, where I'm going to say we need a lot more attention, is an ability to think outside of traditional disciplines. And some universities have done a lot of work in the space of trying to say, can we really break these boundaries? Because they don't work in the 21st century anymore, right? We actually need to have this varied experience.

That's where, I think, there's going to be a lot of attention. Because responsible AI doesn't just need the strict technologists. But you also wouldn't want somebody who's, let's say, a philosopher who comes in and doesn't understand the underlying AI. So you need that amalgamation. You need the varied experience. And that varied experience, oftentimes, is difficult for people to receive. And they're looking for ways of upskilling, right? How can I upskill to a point where I feel qualified given my varied experience? I think that's what I'm learning from Gen Z right now.

KIMBERLY NEVALA: So to be clear, though, we're not suggesting, necessarily, that individuals need to turn themselves into a multivariate - I think we've referred to it as an ethical unicorn. Someone who can actually understand the technology, understand sociology, understand people's behavior, understand the ins and outs of policy and law and tech. Certainly, having a broader perspective and viewpoint will help. But is it really about somebody having to bring all of those perspectives themselves? Or them being open to and able to engage in the collective amalgamation?

DAVID RYAN POLGAR: Yes, it's actually both. And I'm glad you asked that question.

Because to your point, right? You don't want to make that one person the be all, end all. Because we were saying earlier that's not going to happen. But what you would want to have is your core competencies. And then, your base level of understanding.

So really, what I'm getting at is that these ethical unicorns, they're going to come in and work in a larger process. And they might say, OK, my specialty is, let's say, I'm a psychologist or something. Really needed in this space. That's their specialty. But then, they're going to have to have an understanding and appreciation-- familiarity with so many different other aspects in order to be part of this larger collective varied multidisciplinary process.

So one of the things that we did recently with the release of our new and updated responsible tech guide is going through all the interviews of all these career profiles. Because we do these career profiles-- who are these ethical unicorns? And I was really struck by a recurring statement, that everyone in their interview-- almost down to a T-- says the exact same thing.

They say, well, I had a very non-traditional career path. It was atypical. I had a really diverse experience. I have a non-linear career path, right? That is actually the value that we're seeing: the people who really thrive in responsible AI and responsible tech are the ones that can start gathering so much. Obviously, you can't be a specialist in all these fields. But you're gaining insight and appreciation.

Especially a lot of the work I do in the social media space around trust and safety. So many people have a background in law and policy. They might have said, hey, I spent a couple of years working for Twitter. And then, I was at the White House. And then, I was overseas, right? I say, oh, that's really interesting. But then, you actually find that type of experience is typical. Because in that type of role, you're dealing a little bit with policymakers, you're dealing with communications, you're dealing with the technical aspects, right? You're dealing with law.

So oh, OK, you need to have a real baseline understanding of a lot of various fields. And that's very typical for a lot of people. So that's really what I'm getting at. You can't be a specialist in all those, but you can seek out more and more experiences.

And I would just-- I'm an advocate for universities-- it's easier said than done-- but we need to start moving away from a strict discipline and more towards this varied degree. And I saw that specifically with one of our Responsible Tech University ambassadors, where she said in her profile that she really kind of struggled. She had to make a unique kind of education experience. And that, how do I find this? How do I find my advisor? How do I find my resources with it, right? Because that's what we need to do. We need to start creating a new field. A new level of experience, because that's what responsible AI is. It's various disciplines coming together in a melting pot, right? So it's fascinating.

So yeah, there's a lot of work to be done. But I'm also, like I said, enthusiastic to see how much work has already been done and just how active and passionate the community is.

KIMBERLY NEVALA: It strikes me that we need to have a Renaissance of Renaissance education. Where we embrace that idea of a person who understands and is open and educated and engaging in a lot of different conversations in a lot of different disciplines.

Now, that's clearly something we're going to be looking at that's a little different than a traditional career path. But one of the other things we've been pondering is, when we talk about AI and we talk about it in this context of responsible or ethical technology, we tend to look at AI as, yes, it's a novel new technology. And there's an implication that it then creates a distinct new bubble, if you will, in the AI ethics or the responsible tech space.

And I'm wondering, in your experience, are the challenges that we're facing, that are being raised by AI, really new? Do they require new approaches to address? Or are some of these pre-existing challenges that we're just seeing at a scope and scale we weren't aware of before?

DAVID RYAN POLGAR: They are definitely pre-existing. And this is why we need people who understand history and can learn from that involved in the process. It's very similar in nature to how we think about parenting.

And what I mean by that is, oftentimes, somebody could say, well, how can the parent understand their child's experience when they didn't grow up with TikTok? They didn't grow up with a lot of social media.
And it's because even though those are new, it's the social and emotional learning parts that aren't new, right?

So there are parts that we can always find that are analogous. And it's the same thing: when we're pondering AI, we're actually pondering equity and fairness. We're pondering future laws and how they impact people. Those are questions that we've been pondering for thousands of years as humans. And by recognizing that, it opens us up to demystify the AI a little bit and bring in people who want to be part of the conversation but constantly feel left out.

That's a part that I really want to emphasize because, so often, in my career, that's who I'm running into at All Tech Is Human. It's people who feel like they have an expertise, they have an insight, they have a passion, but it doesn't have a pathway. It doesn't have an ability to be used because somebody might say, well, you don't have a specialty in AI. So why are you having a conversation about it? Why are you offering an opinion about how we can improve our AI future and how we can be more responsible with AI? You're not creating AI.

That's what we need to move away from. That is a voice and a voice that's talking about their future. And AI is impacting their future. Therefore, their voice should be heard. And it's not that that voice needs to overwhelm that of the traditional creator. It's that it needs to be part of that overall process, to inform, to engage, to interrogate, to think better, to ponder.

So the more we ponder AI, I think the better off we'll be to go to the earlier point. AI is not just a runaway train. It's not creating itself. And it is up to what we're pondering today. And that's why I'm thrilled to have this conversation with you. Because we need to have more conversations like this. And we need to have more people debating this. Discussing this at length.

Unfortunately, these aren't quick little two-minute conversations that we can have on a lot of TV programs. It works well, I think, in a longer form like podcasts where we can really deal with the consequences, deal with the struggle, deal with the people that we need to involve. So yeah, there's a lot we need to do. But the fact that we're pondering it, I think, is a major testament to a recognition that AI is important and it's so important that we should be involved in its process.

KIMBERLY NEVALA: Now, for someone who is involved in creating AI solutions or AI strategy or trying to ensure that their organization shows up in a responsible and ethical way, I can imagine they're saying that's great, Kimberly and David, co-creation, collaboration, diverse perspectives, systems thinking. How do I actually do that? Are there a few nuggets you can leave for organizations about how they can start to practically embed and practice some of these concepts?

DAVID RYAN POLGAR: Yeah, well, with All Tech Is Human, we had a report recently called ‘The Business Case for AI Ethics: Moving from Theory to Action’. And one of the things that we really discovered there, by having this large working group of over 100 people, was that to operationalize AI ethics, we really need to ensure that it's a multi-pronged approach that considers leadership buy-in and, at the same time, the empowerment of the tech worker.

One of the challenges that really seemed to come to the surface there, that I think was holding back a lot of how we operationalize it, is, one, with AI ethics it's very easy to get caught up in the weeds of semantic debates. I mean, you see so many flame wars kind of happening around AI ethics.

[INTERPOSING VOICES] Is it ethics? Is it responsible AI? Are we ethical?
DAVID RYAN POLGAR: Right.
[INTERPOSING VOICES]

DAVID RYAN POLGAR: And then, we're debating it. And then, people are wondering about who is this person to be able to talk about it? Who is this company? And you really can get into a lot of debates.

Whereas, at the end of the day, we're trying to move the needle, right? We're trying to create a better tech future. Improve our AI to make sure that it's more fair and just and equitable to us as individuals, respecting our civil liberties, and also democracy at large. And now, everything we're thinking about there.

So that is going to be key: saying, how can we move away from-- hey, is it just about the employee? How do we give them tools to really be responsible technologists, or is it all just about leadership? If they don't buy in, then none of this matters. Whereas, I would say, it's both, right? You need to have that as a part of the larger process.

Because as we've seen in a lot of recent cases, if you just have an empowered employee who doesn't feel supported by their employer, one, they're going to feel disenfranchised. But two, they might get fired, right? That's a problem by itself. One of the tensions, I think, that's happening is that responsible AI attracts a crowd that might be used to the academic freedom of working at a university. Whereas working at a university versus working at a Fortune 500 company, those can be different in terms of company culture, in terms of academic freedom. That's, I think, a natural tension that's happening.

So I think one of the issues that's probably going to happen in the coming years is going to be, how do companies become more OK with constant accountability and transparency? I think this is the struggle that social media companies are going through, too, is that all right, how do we make ourselves OK for being interrogated? OK for the fact that this isn't a perfect system?

And I think that's why the larger debate around recognizing how difficult these issues are, I think, goes a long way for any company that is trying to be more responsible to also be humble about the fact that it's an imperfect system. That you're going to make mistakes. And I say that because if you set yourself as a company on a path where you're basically saying, OK, now we're going to be bulletproof, right? We're just going-- every decision is going to be right.

Well, a lot of these discussions are very malleable. They're evolving based on changes in technology, changes in social norms, right? There's a lot of unknown factors that are playing a part. And the more that a company can go in there and say, we are recognizing that we care deeply about these issues. We also recognize that ethics is part of the DNA of the company. Yes, we might have a chief data ethics officer or something. But we also need to incorporate that throughout all the different parts of the product process.
That's going to go a long way. Because it's saying that we might make a mistake. There might be a problem that happens. You might disagree with what we do next year. But we're also going to plug in to learning, to reacting to how people feel about something, and being able to kind of adjust accordingly.

So I think that part is going to be key. And then, pairing it with the bottom-up approach as well. Creating tools and empowerment for a lot of tech workers to realize this is a large base. They're looking for commonality. And I will say, with my experience with All Tech Is Human, we have a lot of tech workers who take part in these events and projects. And one of the benefits they receive, that they tell me about, is that they like the fact that they find other people who deeply care about this. Everyone is looking for some level of validation and support because it also allows them to go back to their employer and say, well, actually, a lot of companies are dealing with this. Wow, I just took part in this Responsible Tech Summit.

And I heard so many people talking about this. It affects company culture, too, because they're realizing that it's not just a few lone wolf rebellious characters. That eventually, if we talk five years down the road, we might drop the term responsible AI. Because it will be so embedded in our culture that you're either going to have AI that's responsible, and we just call it AI, or you're going to have irresponsible AI, and those are companies that should be penalized by individuals and maybe even our legal system, right?

So that's where this is going. Where responsibility is so in vogue and so important and actually has a clear ROI that it's eventually just going to be ubiquitous. And that's, I think, the end goal is that everyone should be responsible technologists. Everyone should be in responsible AI.

KIMBERLY NEVALA: I think that's a great note to end on. One of the things that I took away from that is that we should all proceed forward with confidence and we should proceed forward with confidence that we're going to get it wrong. And that is part of the process and that we will all embrace that as part of the process.

DAVID RYAN POLGAR: Embrace the change. Embrace the messiness. Responsible AI is messy. Let's roll up our sleeves and let's get messy, but let's make positive change. That's what we're doing here.

KIMBERLY NEVALA: Yeah, that's amazing. Well, thank you, David, for that amazing insight and also your optimism that does come through even as we talk about these thorny issues and how we can, in fact, achieve a better tech future for all. I really appreciate your thoughts.

DAVID RYAN POLGAR: I love being on the show and let's continue the conversation and bring more people into the fold.

KIMBERLY NEVALA: Sounds great. In that vein, we continue the discussion with Vincent de Montalivet. Vincent leads Capgemini's global sustainable AI practice. And we're going to discuss the yin and the yang of the relationship between AI and our environment. Subscribe now to Pondering AI so you don't miss it.

[MUSIC PLAYING]