Pondering AI

Afua Bruce explains that public interest tech is about solving complicated problems with real impact for real people, not just fuzzy feelings and philanthropy.      

Afua and Kimberly discuss misconceptions about Public Interest Tech (PIT); PIT beyond philanthropy; why tech for good isn’t always; purposeful productivity; “solving” non-profits; tech funding traps; PIT design principles; cross-sector career paths; participatory (vacation) design; the messy middle; focusing on impact; responsible investment; and knowing we still have time. 

Afua Bruce is the founder and CEO of ANB Advisory Group. An author and leading public interest technologist, Afua works with philanthropic institutions, tech companies, and nonprofits to develop and use responsible technology. 

Additional Resources:
A transcript of this episode is here.

Creators and Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Afua Bruce
Founder & CEO ANB Advisory Group

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

In this episode, we're so pleased to be pondering Public Interest Tech and social impact with Afua Bruce. Afua is the founder and CEO of ANB Advisory Group and the author of The Tech That Comes Next. Welcome to the show, Afua.

AFUA BRUCE: Thank you so much for having me. I'm excited for our conversation today.

KIMBERLY NEVALA: Oh, me too. Now, you've had an absolutely fascinating career and you've worked across a span of organizations in both the private and public sectors. Starting with IBM, over to the FBI and the White House, to organizations such as New America and DataKind. And no doubt all of those have influenced your current trajectory with ANB Advisory Group. But I'm wondering if there were some particularly seminal or pivotal moments for you along that journey that have contributed to your current mission of enabling mission-driven organizations to adopt responsible tech?

AFUA BRUCE: Yeah. Thanks so much for that question. It is hard to pick just one or two seminal experiences from my career because I feel really fortunate in that every organization that I've worked with, every job I've had, has really taught me something that has led me to where I am today and to the type of work that I'm doing.

I think if we go back all the way to where I first started my career as a software engineer at IBM: I co-opped at IBM all through undergrad, and then my first job out of college was also there at IBM as a software engineer. And I think, there, I just really learned the discipline of what it means to write a lot of code and deliver code, but also what good management looks like, what solid strategy looks like, what it means to work between the engineering team and technical sales team and release managers. I really had a lot of exposure to the different parts of the business while I was there, and I think that has really informed my commitment to focusing on mission, focusing on product, and really the cross-functional nature that is required in order to do that well.

KIMBERLY NEVALA: Yeah. And so I don't know if you anticipated back in the day that you'd be working in this particular part or this sector, but certainly that cross-functional view, that multidimensional view, is particularly important when we're talking about public interest tech and social impact organizations.

AFUA BRUCE: That's true.

KIMBERLY NEVALA: So before we dive into that, would you be able to give us just a baseline definition for folks who may have an inkling of what they think public interest tech even is? And maybe highlight some of the misconceptions or misperceptions that folks may commonly have.

AFUA BRUCE: Absolutely. So I think one of the things about public interest tech is that if you talk to 10 different people, you'll probably get 10 to 15 different definitions of what public interest technology is.

I think it really boils down to: what does the intersection of technology, people, and policy look like? And how are you solving for those things? It's very intentional about combining technical and non-technical expertise and really centering people, centering justice, in the solutions that are developed and in the work and the products that are delivered.

So misconceptions that I think people have about public interest technology-- I think, one, is that if you are working in the public interest you're not working on challenging problems. Or you're not working on technically complex problems. I found that to be the opposite of reality in my experience.

Solving problems like how do you build a system that anyone can access, regardless of what their access to broadband looks like, what time of day they need to get on, where they're accessing it from, and make sure that it is reliable and up all the time is a really complicated problem. Looking to see how you are developing a product that maybe is helping provide tailored education to students in a way that is responsible, in a way that protects privacy, in a way that affirms the dignity of the people that you're working with and really does secure the data of those students is a really hard problem. So I think the first misconception is that these are easy problems. Or sort of, we'll get around to it, or if you need a break from the real-world work, go into public interest technology.

I think the second misconception that I would point to is that all of the work is technical or all of the work is non-technical. It really is a combination of technical and non-technical expertise. I've been in rooms where people say, oh, for public interest technology, we really need to figure out this tech. And once we figure out the tech, everything will be unlocked; or on the flip side of, like, oh, we just need to focus on policy, and everything will be unlocked. It's really a combination of technology and non-technical solutions together in order to come up with what's required.

KIMBERLY NEVALA: So as you were speaking there, it struck me that there are lessons to be learned because we also see all of those misconceptions often in the private sector as well as the public sector. Although the actual impact may be felt more strongly when we're dealing with organizations working in the nonprofit sector in particular.

AFUA BRUCE: Sorry, just to touch on that, I think what you said is just so true. We often, myself included, talk about social impact organizations. But the reality is, anyone who is building a technical product that has a lot of users is having an impact, and is having an impact on society. It's a question of how do we define that impact? And what is the impact we actually want to see?

KIMBERLY NEVALA: Yeah, I love that. I love underscoring the fact that public interest does not mean a nonprofit or philanthropy - although certainly that's there.
One of the things that was interesting in reading the book, and it really brought to the fore of my mind, was how we think about then funding and enabling the technology. So I do want to talk a little bit more about that.

But there are also narratives in the broader tech ecosystem today that I have to imagine pose a weightier challenge - or have a weightier import - when we're dealing with philanthropic or social impact organizations. One of those being tech for good. The assumption that when we're trying to solve some of these problems that would fall broadly under this umbrella - any of those 15 definitions - these are always tech for good projects. And tech for good is always good. So can you talk a little bit about your perspectives on that terminology and where it may be good and where it may not be?

AFUA BRUCE: Absolutely. And I have in the past, and will probably in the future, use tech for good as a shortcut to talk about technology that explicitly has a positive social impact. Where the goals of both the technology and the organization or the environment are for positive social impact.

I think when people say tech for good broadly, what is challenging about that, though, can be that sometimes we assume that if we are developing technology for an organization that has a mission statement that it's given itself, or perhaps the general public has implied is for good, that the technology then is inherently good. And I'd encourage people to push back against that assumption.

And that's because an organization might have as a goal to provide counseling services at scale, to feed hungry people, to house people who need access to stable and affordable and secure housing. Technology built for those organizations may or may not follow secure by design principles, may or may not follow privacy by design principles, and the like. So we want to make sure that when we say tech for good, it's not just technology for organizations that are doing good. But it is really technology that is designed along the lines of these public interest tech principles. These responsible design principles that, again, do prioritize positive impact, do prioritize security and safety.

KIMBERLY NEVALA: Yeah, and this brings to mind this morning I saw a post from Alison Taylor, who had written the book Higher Ground a couple years back. A year or two back? I'm losing track of time these days. And she had pointed to this proposition that tech will just solve the ills. And you alluded to this earlier when you said, well, if we just get tech, it'll unlock it, or if we just get policy. And she said, the problem is that we don't understand that most problems aren't tech-shaped problems. They're human-shaped problems.

And that, for me, reflected something that I think I've seen in your writing when you said tech doesn't inherently solve problems, and it doesn't inherently act with humanity. And again, in the space that we're talking about here, it seems like not acknowledging and then designing for that fact could have really detrimental impacts.

AFUA BRUCE: Yeah, that is absolutely true. I did an interview several years ago, and someone asked about technology and what's it like swimming down the river of technology?

And the response - I think I did this interview with Amy Sample Ward, who I co-wrote The Tech That Comes Next with - was some version of: there's no naturally occurring river of technology where you just go down to the river and scoop up an algorithm. Technology exists because humans exist, and therefore humans have the power to think about what problems we design for, how we roll out solutions to them, and then what is included, what is prioritized, what is solved for and what is not solved for, what is put into the technology and what is not put into the technology. Those are all human decisions and human constraints that are put into those situations.

So yes, technology is inherently human, even though sometimes we like to say we are divorcing technology from the human experience. We have technology because we have humans.

KIMBERLY NEVALA: Yeah. And the old trope, I suppose, used to be that what you measure matters. And certainly what we lean into matters. And another narrative, when I was thinking about this particular conversation today, that strikes me as maybe even antithetical to the core of a lot of these mission-driven organizations is around productivity.

Now, I think we could argue, with some data at this point, that productivity is not - or productivity as the ultimate goal - is not serving even public not-for-profit organizations well, either. But is the productivity narrative particularly problematic in this space? And if so, why?

AFUA BRUCE: Yeah. I actually don't know that the productivity narrative is inherently problematic in this space. I would say anything that deviates from what is the ultimate impact that we're having in the world and in our defined mission space, anything that deviates from that is what becomes problematic.

Throughout my life, I have volunteered with a number of organizations and in a number of different capacities. And I remember several years ago volunteering for an organization that did literacy work in homeless shelters and domestic violence shelters. The organization isn't important, but what is important is that I was on a call with my co-site leader. We were scheduling things. And I am watching this man struggle with Excel spreadsheets. We need to total up a line, and he's going through clicking cell by cell by cell. And I said, this can be done a little bit quicker. And he said, I'm doing it. And I said, if you could just give me control, I could do this in a minute, as opposed to the 15 minutes I've been watching you at 9:00 PM to get this done. And his response was, this work is a labor of love and we should just accept that.

And I do think that sometimes, in the social impact sector, in social impact organizations, we can get into this trap of: it's a labor of love. We have to do things this way, because this is how we know we have done them in the past, and we want to focus on the labor of love. But I think creating space for what innovations are out there, what ways we have to increase productivity, in that case will actually help us get our work done faster and then deliver better quality services to more people, which is ultimately the goal.
Now, at the same time, I don't want people to overcorrect. I have also fielded calls over the past year where some very well-meaning person calls and says, I would like to give a lot of money for AI to solve the nonprofit sector. And I'm like, what type of AI? Who have you talked to? What about nonprofits? And it's just, AI can solve nonprofits. And so we go back and forth. I don't want people to overcorrect and just think technology is everything. But somewhere in the center of those two extremes is where the truth and, I think, the responsible action lies.

KIMBERLY NEVALA: Yeah. And do we then need to shift our thinking about when someone says, I just want to give you money? Or maybe, I just want to give you this. I'm going to donate some software. I'm going to donate an AI system that we've already developed elsewhere so that you can use it to front your mission. Please go make use of that.

And I would imagine that, in some cases off the shelf, I mean learning how to use Excel well doesn't require a whole lot of customization. And I sit on a - I think I've shared this with you before - I sit on a board of a very small nonprofit. And we have this odd pressure, and I think sometimes it's self-inflicted, to not spend a lot of money operating the business. Which means that we are trying to operate on a shoestring budget. So we can say, when you give us donations, a very small percentage of what you give us is used to administer the function.

But when it comes to actually developing good tech in these areas that, as you noted earlier, are actually very complex; these are harder problems than we may see in other commercial areas. It seems that that could really work against us. So is that something you've seen as well? And what do you advise in that case?

AFUA BRUCE: Yeah, absolutely. So I am very aware of what the funding situation looks like for nonprofits across the board right now. And I don't know of any nonprofit that, if someone said, I would like to give you money, is going to say, actually, we're good. So I want to recognize the environment folks are operating in.

At the same time, you talked about how sometimes people say, oh, I want to give you this product, or try this product. We'll help you implement your solution, and then you can run with it. One of the things Amy and I talk about in The Tech That Comes Next is the trap that organizations then find themselves in, where maybe they have taken a product from company A, a product from company B, and a product from company C. Well, one, the implementation was a reasonable cost because it was comped or given at a lower rate. But then, in order to maintain it, the costs have gone up. And so that is a trap for organizations, because now they have to figure out how to fund the maintenance costs at a much higher rate than they did initially.

Another challenge is taking from company A and company B and company C: do those products all work together in your organization's tech stack? Or are you now having to build additional technology to make product A talk with product B and product B talk with product C? And what does it look like to do updates, to maintain a good cybersecurity posture and a good maintenance structure for your overall tech stack? So I think that's another thing organizations should consider when adopting a number of different products from different companies.

And then I think the last thing that you touched on is just what else organizations should be looking for when they are evaluating what to accept and what not to accept, and how that ties to their bottom line. And to the ever-important percentage of funds that go to programs, that go to people, as opposed to operations. I really think the nonprofit sector needs to continue to push back on this, I'd say, extreme misunderstanding of how budgets work. The idea that if you are spending money on technology, for example, or salaries, for example, that money is not going to people. If you look at any private sector company, and you look at the budget for their tech expenses, it is more than the 10% or whatever overhead target some organizations may have.

Also, it is 2026. Caring about people is caring about their data. It is caring about the technical experience, the user experience, they have when they interact with your products. And in order to do that, you have to spend money on technology. Also, again, because it is 2026, people expect a certain level of service and a certain quality of experience when they interact with technology. If they can go to a number of other products and a number of other places for the rest of their lives and have a pretty easy-to-use interface, but accessing a basic service or additional support from you requires navigating clunky systems and clunky services, that is not an experience that is going to keep people coming back for that service. And it's also not an experience that donors will want to see that they're funding.

And so I think the nonprofit sector - and there are many, many people and many organizations who've done a lot of work over the past decade or two to really push back on this. But I think, overall, we need to continue to push back and really own that in order to deliver good quality services, even if you are - especially if you're trying to increase productivity, regardless of how you feel about that being an appropriate metric for the space - you need to invest in technology. You need to invest in systems, and you need to invest in people.

KIMBERLY NEVALA: So are there some recent examples where organizations in this space have made those mindful, sometimes large investments, not just to acquiring the solution initially, but to actually being able to evaluate it, modify it, maintain it over time? That have, again, really lifted their mission up in a way that would help us start to really chip away or have people thinking differently about what it means to invest or to donate to philanthropic, social impact, or public sector initiatives?

AFUA BRUCE: Yeah, absolutely. I am first going to talk about an organization that I talk about frequently, just because I'm really excited about the work they do.

I've gotten the chance to work with Quill.org, which is an educational tech nonprofit, and they develop software that supports learners. A year or two ago, sometime in the recent past - to your point, our grasp of time is slipping for all of us - they wanted to switch their product, which already used traditional AI, to using Gen AI.

And so, through generous funding from a number of big philanthropies, they were able to invest in that. They were able to use their teacher council to create a lot of training data, powered by actual teachers. So the training data that powers their Gen AI solution comes from real teachers and real experiences, as opposed to training on the internet as a whole. You just get more realistic and higher quality responses through that.

They were also able to invest a lot in testing and a lot in the evaluation. Really making sure that before they rolled out the product to their clients broadly, they had time to do enough testing. Testing internally with their own systems, manual testing, automated testing, also a lot of testing with actual teachers and actual customers before rolling it out. And so it really required a lot of work.

Developing your own Gen AI product from scratch, from the training data on, requires a lot of time, a lot of money, and a lot of expertise. And doing it in a way that's responsible requires intentionally managing the population you're working with, your customer base essentially, and proactive relationships with them. But they were able to do that and have now rolled out a product that's showing great success metrics.

I think another thing to highlight, just because you asked about evaluation: I sit on an advisory council for an organization called Humane Intelligence, on the nonprofit side - there's also a for-profit company - and that's what they do. They have lots of very smart, technical people who are building systems to do evaluation of Gen AI tools. And so I think that is something else we can look at. We say we want to abide by these principles at a high level, but then what does it look like to actually evaluate these organizations, to evaluate the technology, across all sectors, but especially anything focused on impact? I think Humane Intelligence is an organization that's really doing good work in that space.

KIMBERLY NEVALA: It is interesting, too, because as we think about this, and you alluded to this right up front, people tend to think that if we're going to work in any of these spaces and do tech for good, it's a side project. Or, I'm going to take a break from the real work of the real world - and even as I say that, I think the obvious irony or problem with that statement becomes clear. But again, dealing with all of that intersection between the populations, the developers, the policy impacts really comes, I think, to a fine point in these particular types of applications.

So how do we shift that narrative that says this is something you go to early in your career to get your feet wet, versus this is something that actually requires all the experience? So not only should we value that experience, but folks who have that experience should be looking to deploy it here. If you're looking for a real challenge, this is where you will find it.

AFUA BRUCE: I think that's exactly what we say. I think we say exactly that. If you're looking for a real challenge, this is where you find real challenges. That is not to the exclusion of real challenges existing elsewhere. This is a complicated world. There are a lot of problems to solve. You pick your issue. You find an organization, for-profit or nonprofit, that allows you to do that.

If I were redesigning the world, I would do it in such a way that, especially for career paths, people have more fluidity between sectors. Understanding that you can spend a few years in the private sector, then you can hop over to government. You can hop to the nonprofit space and go back to the private sector. So it's not just, I started my career here to get my feet wet, and then I went to my real job. I did a job here, and now I went to a second job here. Or you find a sector that you really like, and you stay in it and really dedicate your career to that.

But I think we should talk about some of the complex solutions alongside other solutions, which are maybe less technically complex but have just as much impact on organizations. Talk about those alongside each other. I know in the examples I gave, I gave the bigger ones that were technically creating and evaluating Gen AI systems. But there are a lot of organizations over the past year or two who have made significant impact by updating their CRM tools. By restructuring platforms that connect volunteers to staff to actual community members, so that resources are allocated in the most efficient and most timely way and community members can receive those services in the most timely manner. There's a wide variety of solutions that it takes to help improve the state of the world that we have.

KIMBERLY NEVALA: And then making sure that we're designing solutions that actually work for the populations we are trying to serve. We talk a lot about adaptive governance, about changing the ways that we have approached things like AI and data governance in the past. And this idea of participatory design, participatory governance, comes up a lot. I don't know that I see a lot of examples of that done really, really well in the private sector or the commercial sector. Although we certainly see folks talking about it: getting the input from your consumers, from your users.

But it doesn't seem that this would be optional in the space that we're talking about here. So what is participatory design or what is its relevance? And what is the requirement in the space that we're talking about here?

AFUA BRUCE: Yeah, absolutely. When I think of participatory design, I really think of strong public interest tech principles being put into play. Which is: if you are designing for someone, you should be designing with them and thinking about that.

My family goes on family vacations once a year. I own the schedule because I choose to. I love scheduling, I love a good Excel spreadsheet. And I have in the past, I will say - I'm stuttering already because I'm imagining my sister listening to this - but I have in the past, for example, taken a look at everything that is going to be in the area we're going. And I create a beautiful, just immaculate, spreadsheet as to what all we're going to do. I line up tickets. I'm like, here's where we need to be and when. And then we get on the ground, and I'm occasionally shocked that people don't just want to follow my well-researched and well-thought-out and well-planned vacation. There's a spreadsheet. Just follow the spreadsheet that I made myself, because I know you all. I've known you your entire lives. I researched the place we're going and I built something that everyone should enjoy. And yet we're on the ground, and now people are complaining.

A bit facetious, but I think people do the same thing with technology, right? I have heard of this population over here. I have heard that people over there may need help with housing, may need to access food. And I know technology. I have done some great ChatGPT, Claude, Perplexity, whatever, searching on what that population needs. So now I have developed a tool that is going to be great, and I'm just going to send it out into the world. And then they're shocked when no one wants it, or when there's just a lot of friction.

And so back to your question about participatory design. We really want to get to the point where we are starting these conversations early with the communities that are going to be impacted by the technology, and really shift from a designing-for to a designing-with mentality. In the book, we give an example of an organization called Rescuing Leftover Cuisine. Which, when it started, was a very, very human process of people walking to a restaurant, or wherever else had extra food at the end of the day, and then walking or biking it to a neighbor facing food insecurity.
And then, as they looked to expand, they partnered with a consulting company that could build a product.

But they didn't just hand it off, as in: consulting company, go design your own thing. Or even: consulting company, talk to one or two people at the nonprofit to decide. They had a system with cross-functional, cross-sector representation of everyone who was going to be touched by it. So volunteers, community members, and restaurant owners could put in their requests and see where their requirements were in the process of getting adopted or not adopted. And that was transparently communicated and explained, and people could track it that way. And that was one way to go about a more inclusive design process.

KIMBERLY NEVALA: And I was chuckling as you were talking there because I was thinking you'd be a fabulous person to travel with. Or just to have the opportunity to pal around with and talk. But I think I would be very frustrating for you, because I like a bit of a plan and I like to ignore it categorically when the mood strikes.

AFUA BRUCE: Well, that first bit of what you said, that I would be a fabulous person to travel with - I'm clipping that and sending it to my sisters with the next spreadsheet.
KIMBERLY NEVALA: [LAUGHS].

AFUA BRUCE: For the record, I do now ask for input before putting together the spreadsheet, so they get their requests in.

KIMBERLY NEVALA: Well, fair enough, but I appreciate that you like structure and are willing to invest in making it so. Because that is the other thing I will tell folks: you are not allowed to say you do not have an opinion only to come up with an opinion later. If you don't have an opinion now, you are not allowed to opine or complain later. And I try to apply that to myself as well - but maybe not always that well.

But yeah, even in that situation, I think these situations that we're dealing with are human situations. They are fraught. They are messy. You recently posted, and we'll link to this as well, a Ted Talk from Dr. Nakalembe - and I apologize that I may have very badly mispronounced her name there - talking about drought prediction and how drought prediction doesn't actually solve the problem of drought.

And it's such a good encapsulation of a lot of what we've talked about throughout here. Which is, it's not enough to have the tech that tells you when a drought is going to happen if we're not actually applying or leveraging the tech to solve the problem or avoid it at all. And so it's a great talk, we'll definitely link to it, and I was really impressed with it.

But again, when we're thinking about that messy middle, and, particularly, again, for organizations in this space where we feel like we just need to be getting on with it. Sometimes there's a pressure to do it perfectly. What are some of the things and mindsets that we need to adopt - that I think also apply to organizations broadly, regardless of the sector that you're in - so that we can actually work through these issues and be active in that messy middle? Are there some shifts in perspective that we need to make or things that we need to be willing to do that sometimes feel a little complex, feel a little uncomfortable?

AFUA BRUCE: Yeah, absolutely. And yes, Dr. Catherine Nakalembe out of the University of Maryland. Her Ted Talk is absolutely fabulous. Her body of work is just really impressive and very impactful.

She talks about that messy middle and talks about how, yes, her work includes using a lot of different AI algorithms and complex data sets to do drought prediction, and other predictions, across mostly Eastern Africa. And so one of the things she talks about in her research broadly, and in this TED Talk, is this messy middle of how we can continue to push for higher and higher and more and more accurate algorithms. But if we haven't figured out how to translate the output of the recommendation into something on the ground, what have we done? She has a line in that TED Talk that is something along the lines of: I've never seen a drought prediction that puts water pumps into the ground. And so the point is, if we've continued to push to the ends, into the extremes, of our technology, but we're not looking at that interaction, then we've only solved maybe half the problem.

And so in shifting the mindset, I would say that we do need to continue to look at what our internal system metrics are but we do need to make sure that we are also looking at that end metric. It's not just how efficient is this algorithm running? Or how quickly do I respond back to email inquiries about mental health services that we're providing? But what number of people are actually getting services? And what number of people are actually reporting improvement afterward? So really making sure that as we think about metrics and measuring, which you touched on earlier, that we're measuring for true impact, as well as the things along the way. So I think that's a little bit of a shift in our organizations.

I think another - and this is especially for people who are working on the technical side - is to really make sure that there's strong integration between the technical personnel and the programmatic personnel. I see a lot of organizations where, even with a strong tech staff, even with a strong IT team - which I know is a luxury for nonprofits anyway; often it is half of one person's job, which definitely should not be the case, and it's not my recommendation, I'm just recognizing the reality of some organizations - even with that, our tech decisions are over here, and our programmatic decisions are over there.

It's 2026. These two things have to be integrated. And so I think our organizations need to work harder to make sure that, as we develop organizational strategy, we are developing our tech strategy alongside that. There's a clear interaction between tech strategy and organizational and programmatic strategy. So I think that's another shift.

And then I think the third shift is to make sure that we talk about our work - about the process as well as the impact. A lot of organizations struggle with this because, again, I recognize the reality that organizations don't have time or money for it. But I think really highlighting the work that went into it - the participatory design that often goes into the work - alongside what the actual impact is, is really powerful. Both for owning the success that organizations have, but also for inspiring other organizations and providing guidance to them as well. And then also to reiterate the complexity and the achievements of the nonprofit space.

KIMBERLY NEVALA: And I have to give you a shout-out here, by the way: your write-up and takeaways about Dr. Nakalembe's talk were just gorgeous in that they left me a lot to think about, and when I clicked over to the talk, I still took things away from the talk itself. And so that is somewhat unusual. It wasn't a transcript of the talk, which I thought was fantastic.

And I like this idea of having, as you said to me before, the confidence to share. As a perpetually recovering perfectionist, and as someone who works in this space as well, I can imagine that, again, there's a hesitation to admit we tried this, it didn't work. Or hey, this is really hard and we're not getting it right, when some social service or some very weighty, high-concept value is on the line. And so this idea you brought out there about reliability over perfection, and just being able to share those experiences, seems like something that we need - not just in the context of AI, although we certainly need it there, but more broadly in this space. Did I take that away properly?

AFUA BRUCE: Yes. You absolutely took it away properly, and I encourage people to share their work, to share what they've learned. I love to highlight, you've mentioned other people's work, when it really inspires me. And also, frankly, my list of things to share on LinkedIn grows exponentially every week. And so I think that's true.

I would also say to the other perfectionists and recovering perfectionists out there that I'm not saying you have to say the same thing to every audience in every space. I think it's also about building the muscle in maybe smaller spaces where you can get into more detail. Maybe that means being a bit more candid: we tried this, it didn't work. We tried this, it didn't work. We tried this. We gave up. Six months later or a year later, we came back. We tried it again - magic!

But, again, I think recognizing that you don't have to share the same thing in every space to every size of audience. And I think we can learn from other people's failures as well as their trials along the way. So sharing that, so people can learn from what you've done, I think, is really important.

KIMBERLY NEVALA: And so for all organizations, again, of any stripe within the social impact or nonprofit space - and this applies, I think, to for-profit companies as well - you've said that talking about responsible AI, talking about responsible tech, is one thing. But institutions both show their commitment and shape responsible AI through their investment.

And you highlighted three areas, which were teams, infrastructure, and narrative. And I'm wondering how you came to those as the three legs of the stool? It's not the traditional people, process, technology - although it's a spin on it, which I appreciated. And then, of those three, is there something we tend to pay less attention to in ways that are detrimental to our mission?

AFUA BRUCE: Yeah, absolutely. And I'll confess to you that it is a spin and an upgrade on people, process, technology, which I talk about all the time. Because teams are people, process and technology combine into infrastructure, and then there's this last piece of narrative. Which I'm finding is more and more important in my own work, especially as I work across organizations and help find that narrative. And so that's really how I came to those three areas.

We get work done through people. I know there are lots of headlines about how we don't need people anymore to serve people. But we get work done through people: the people in our organizations, the contractors, and, most importantly, the community members whom we are serving. So really investing in teams that have the skills to build the right relationships, to foster the right insights and direction - really important.

The second piece is infrastructure. This is the technical infrastructure that's required. It can be broader, sector-wide infrastructure - whether that's a technical backbone or just a channel to share funding opportunities. Organizations can then build on top of that infrastructure to do their own customizations for their particular issue areas, their particular missions, the particular communities they serve.

And then, finally, this narrative piece. I think, especially when we look at Gen AI right now, most of the popular and most common narratives are for-profit, and for very distinctly for-profit goals, as opposed to more human-centered goals. I think there is a lot of great work being done in the nonprofit sector around Gen AI. Sometimes, it's pushing back against places where Gen AI should not be used.

But even more broadly than Gen AI, there's a lot of great work being done in the social impact space. And so we should own that narrative and also offer it up as a contrast to the very narrowly defined for-profit narratives that have become really popular. I think that's true, yes, definitely for Gen AI. I think it's even true for how organizations are structured and function - this idea that if you want to have real-world impact, you have to go to one of these five companies, or you can start your own enterprise, you can do this and that. So I think owning that narrative is really important.

KIMBERLY NEVALA: Do you think that if we proclaim those narratives more loudly, more broadly, and tell these really great stories, it helps us to address - I mean, certainly the market incentives today favor scale, scale in terms of users, versus scale in terms of impact. But would leaning into this narrative help to bend even the venture capital piece towards seeing the value in "investing" - I use air quotes there - in philanthropic pursuits?

AFUA BRUCE: Yeah. I would hope that it would help to shape that narrative - although I don't know that venture capital would be the way that I would go - but to shape investing in nonprofits, in social enterprises more broadly.

I think there's often a perception that the social impact space is very, very small anyway. Or that people do not have purchasing power. That it is all about warm, fuzzy feelings and not about real impact. And so I think if we can start to shift some of our narrative, that really does get to: there is true impact. There's quantifiable impact. There is more than warm, fuzzy feelings happening in this space. And there are a lot of people who are affected by it. These are some of the metrics that some investors look for.

And I think the other challenge is that there are a lot of different models on double bottom line, triple bottom line, and others. There are a number of people who have been doing great work here - some folks out of MIT, Cornell Tech, and the University of Michigan who, I think, are really leading in some of this research. So I will defer to them for more specific models. But I think there's a lot of good research there.

But I think part of it is talking about these things in some different ways. In getting that narrative out there.

KIMBERLY NEVALA: Yes. I have a secret, rose-colored hope that someday venture capital actually would be interested in this space, because we are able to show, or they just understand, that there is actually economic benefit in doing this as well. There's very real human benefit. But as you say, those two things are closely intertwined.

So with all of that being said - and there's so much more we could be probing with you, as maybe indicated by my weaving all over the place today - are there any final words, reflections, encouragements, or points to ponder that you'd like to leave with the audience from your work in this space?

AFUA BRUCE: Yes, there are. We talked about so many great things today. I'm trying to think what is an underlying theme, or what's the one thing that I want to underscore for your listeners? And it might be that we still have time. [CHUCKLES]

We still have time to change and to be proud of our narratives. We still have time to make different decisions, to make more inclusive decisions about technology. We still have time to improve how our organizations function. But also, we still have time to deliver true positive impact for the communities we serve.

I think sometimes the framing in the world, especially with all the changes over the past year or so, is that we're out of time. This work isn't possible. The window for when we could have an impact is shrinking. We still have time. We still have energy. We have to lean into that. And we can figure out how to get some great work done.

KIMBERLY NEVALA: Well, that is an excellent and hopeful call to action to end on. So thank you so much. I really appreciated your time and patience with all of my questions about all of the things.
[MUSIC PLAYING]

AFUA BRUCE: It was a great conversation. Thanks so much for having me.

KIMBERLY NEVALA: Anytime, absolutely. And to continue learning from thinkers, doers, and advocates such as Afua, you can find us wherever you listen to podcasts, and we're also on YouTube.