Welcome to CharityVillage Connects – a series that highlights topics vital to the nonprofit sector in Canada. CharityVillage is a resource to over 170,000 charitable and nonprofit organizations in Canada. This series, hosted by President Mary Barroll, will provide in-depth conversations with experts in the nonprofit sector. We’ll examine diversity, equity and inclusion, innovations in fundraising, the gap in female representation in leadership and many other subjects crucial to the growth and development of charities throughout Canada.
AI for Impact: How Nonprofits Are Turning Tech into Social Good
SFX: Sounds of typing and office ambiance
Mary Barroll: Welcome to CharityVillage Connects. I’m your host, Mary Barroll.
SFX: Hummingbird flying and tone
That’s the sound of a Hummingbird pollinating our world and making it a better place.
Music
The Hummingbird is CharityVillage’s logo because we strive – like the industrious Hummingbird – to make connections across the nonprofit sector and help make positive change.
We’ll offer insight that will help you make sense of your life as a nonprofit professional, make connections to help navigate challenges, and support your organization to deliver on its mission.
Mary Barroll: In this episode of CharityVillage Connects we're exploring practical ways and real-world examples of how Artificial Intelligence can be used by nonprofit organizations to streamline workflow and add efficiencies, freeing up time to focus on more important, impactful work. We’ll learn from the Executive Lead for RAISE, Canada’s first national program to train more than 500 nonprofit workers in AI, and we’ll speak to the leaders of two of the pilot project’s organizations: CAMH Foundation and the Furniture Bank. Through these conversations, we’ll attempt to answer the question, how can we responsibly and effectively harness the power of AI for social good?
SFX: News buzz
News clip “OpenAI just launched ChatGPT Agent. Like ChatGPT and other chatbots you may have tried, you can use it to do research and summarize information. But it's also powered by something called agentic AI, an artificial intelligence agent. And that marks a big change in how you use the web. So, how does it work and can you trust it?”
News clip “The Canadian Anti-Fraud Centre says Canadians lost $310 million to investment scams last year. Most frauds involved altered video using AI to make it appear someone is telling you to invest in a platform which is completely fake.”
News clip “The job market in Canada is facing some transformational change, including, of course, the arrival of artificial intelligence. So, is AI here to take your job? Here's what you need to know.”
News clip “The world is pulsing with a powerful new technology. AI has moved from fantasy to fact. Two-thirds of people around the world expect to use AI within a year. Areas like health, education and work are being transformed. And the revolution has only just begun. So how should we deal with this crucial issue? How can we make sure that countries and people benefit from the AI revolution?”
Music
Dan Kershaw: There's lots of people who want to throw rocks at AI. But we, as a sector, aren't putting the genie back in the bottle. It's here. It's not changing. And even if AI doesn't get any better, it will radically change our sector. And I, for one, think we as a sector should engage with it.
Anne-Marie Newton: Everything I understand about AI, first and foremost, is that it's like electricity. It's coming whether we want it to or not. And so, to avoid it or to think that we can defer until everyone has adopted it feels a little bit like not the best strategy.
Elena Yunusov: I really think we're leaders in adoption. It's an exciting place to be in. We have leaders thoughtfully approach AI adoption from a first principle of augmenting and not replacing human-in-the-loop and human intelligence and agency and contributions, which is so important for the sector, not to lose that human connection that we have.
Tim Lockie: I think that it starts with a focus on human agency, not with efficiency. I feel like the most important part of keeping AI in a human framework is to think of it as a human-agency problem, even more of a nervous-system problem, rather than an efficiency problem.
Jason Shim: I remember, many years ago, when folks were starting to get online, that a general thing to keep in mind was: don't share anything on the internet that you wouldn't want posted on a bulletin board. When it comes to sharing private information, one of the things to consider is that if you're not paying to use the product, you may be the product itself.
Jessica Vestergaard: I think it's really important to get AI into the hands of the good guys. AI is already being used in so many different ways, whether we like it or not, it's here to stay. And the bad guys are already out there using it for things like scams and fraud and deep fakes and all kinds of nasty stuff. And nonprofits really have the opportunity here to get out there and use AI as a force for good.
Mary Barroll: Canadian nonprofits are often slow to embrace new technology. But will they be left behind as AI rapidly revolutionizes our world? In this episode of CharityVillage Connects, we're exploring this urgent question: Is there a way for nonprofits to responsibly and effectively harness the immense power of AI for social good? Can they streamline workflows, boost efficiencies, and crucially reallocate precious human energy from those repetitive admin tasks to the stuff that really matters, the more impactful human-centered work that they do? In this episode, we speak to a few innovators in the sector about the responsible adoption of artificial intelligence by nonprofits. To help us look for a clear path forward, they share their insights about how to take first steps and bravely embrace AI.
SFX: Digital sounds
According to the Digital Journal, more than 170,000 nonprofit organizations support Canadians every day, yet fewer than 5% currently use artificial intelligence. The RAISE initiative aims to close that gap. Designed specifically for nonprofits, RAISE, or Responsible AI Adoption for Social Impact, helps organizations adopt and govern AI responsibly, with an emphasis on long-term capacity, ethical use and trust. RAISE is being delivered through a partnership between the Human Feedback Foundation, the Dais at Toronto Metropolitan University, and Creative Destruction Lab, with co-investment from DIGITAL, Canada's global innovation cluster for digital technologies.
Elena Yunusov is the Founder and Executive Director of the Human Feedback Foundation, which works to foster a more open, human-centered, and transparent future for AI. I asked her to tell us what inspired the RAISE initiative and why programs like this are so important.
Elena Yunusov: I think it's a really significant moment we're in. It really, truly is a threshold moment. The funding is shrinking. Philanthropy is shrinking. Government funding, I believe, will also be pulled back in the next budget. And so, we are really looking at a crisis here, because ever since the pandemic, every organization that we've talked to told us that demands on their services are way up, but their staff is burnt out. There's churn, there's skills and change management and all of these barriers. And yet without using AI, I feel that we are missing a massive opportunity to do good with technology that is fundamentally really well suited for nonprofits.
What inspired this initiative is us looking for leverage points to see where human-centered AI would come from, since that's the mission of Human Feedback. And so, it's come from consultations and many conversations about what can we do, as a sector, to move into the age of AI.
For RAISE to come up with even the proposal, we've worked with the members of our consortium, with Community Foundations of Canada, with UHN, with CHEV and others, to better understand the barriers and the challenges, before we designed what we thought would be a thoughtful pilot, to even see what we can practically accomplish together, to move the adoption dial from left to right, for the sector. So, it's really grounded in actual work and consultation with the sector, and collective thinking about a better future that hopefully we can all help shape.
Mary Barroll: Elena Yunusov details how the RAISE initiative is structured so that large as well as medium to small organizations can benefit from its resources.
Elena Yunusov: We've structured the program for large organizations that have the data to go into the organizational AI adoption track, delivered in collaboration with CDL. And then, for small to medium organizations to benefit from the program, we focused on upskilling, to build that capacity for two types of audiences, non-profit leaders in organizations who could use literacy for responsible AI principles, governance and policies in their organizations, and then frontline staff and delivery, where it's more about using the existing third party tools well, mitigating risk for their organizations, but also being a sophisticated user of the tools; knowing what's available and what to use when and why, and doing so ethically and responsibly. So, grounded in practical skills development.
SFX: Sounds of typing
Mary Barroll: Here are some real-world use cases Elena Yunusov is seeing, where nonprofits are already using AI successfully.
Elena Yunusov: I've searched far and wide for case studies. That's what I really wanted. And I started that search over a year ago, when this program was just a Google doc and a conversation. This isn't a sector that's known for bold leaps into the unknown and doing a lot with tools that are evolving every day. And yet globally, especially with RAISE and the work that Canadian nonprofits are doing, I really think we're leaders in adoption. It's an exciting place to be in. We have leaders at Kids Help Phone and Furniture Bank leading the way, showing others how it's done and sending ripples throughout the sector. Canadian Cancer Society has done a lot of work internally to look at these cases and so thoughtfully approach AI adoption, from a first principle of augmenting, and not replacing, human-in-the-loop and human intelligence and agency and contributions, which is so important for the sector, not to lose that human connection that we have. So central to the mission.
So, those are the types of use cases that I think are great. Like with Kids Help Phone: the use case there dates to when their call volumes surged over the pandemic. They worked with the Vector Institute to enable them to do triage more effectively and route calls with suicidal intent to the volunteers faster. Those types of implementations are extremely amazing because, in their case, I think that one model they've developed is saving lives today. Those types of use cases are amplifying and augmenting and helping organizations deliver on their really important missions. And I think they're grounded in human-centered principles that we hope to advance and help shape. And I'm excited that Canadian nonprofits are looking at AI as a massive amplifier for that type of work.
Mary Barroll: The Centre for Addiction and Mental Health or CAMH Foundation joined the RAISE program to explore responsible AI in their organization. Anne-Marie Newton is President and CEO of the CAMH Foundation, where, in support of Canada's leading mental health hospital, she leads efforts to advance mental health through philanthropy. Here’s what she has to say about embracing AI as a tool to galvanize the Foundation’s work.
Anne-Marie Newton: Everything I understand about AI, first and foremost, is that it's like electricity. It's coming whether we want it to or not. And so, to avoid it or to think that we can defer, until everyone has adopted it, feels a little bit like not the best strategy, if we are thinking of being future-proofed, and really gearing up as an organization to continue growing. So, this idea of embracing new technology really aligns with what the hospital has made its mark in.
Mary Barroll: Anne-Marie Newton tells me that even before joining RAISE, the CAMH Foundation was already experimenting with how they might implement AI.
Anne-Marie Newton: We did some test groups, identified teams where AI could be piloted among early adopters. Our marketing team actually saw some real use cases in drafting emails or getting the right tone around communications, where AI could be helpful. So, they did a team session with an external consultant to pinpoint where those use cases might be best deployed and implemented, in a very limited scope. And that was really helpful in defining, beyond that, how we wanted to craft our AI policy and how we wanted to think of deployment at an enterprise level. And so, what we're really thinking now is, we can see some specific communications and process lanes where AI is going to be really helpful, and we're charting a course forward for that. And the RAISE program offers us an opportunity to do a real-time project.
Mary Barroll: Dan Kershaw is the Executive Director at Furniture Bank, a charity that creates homes from empty housing through the gift of furniture. A self-described tech nerd, Dan was an early adopter of AI at Furniture Bank, when ChatGPT was first exploding across the globe. He shared his adventures in our 2023 AI-focused episode, and he's back for this episode to share an overview of how Furniture Bank started using AI for a unique fundraising campaign a few years ago, and how AI is now used in virtually all operations of the organization.
Dan Kershaw: Back in 2022, before ChatGPT, we self-identified as a social profit. And when we are out doing our fundraising campaigns, we all want to tell our stories and show the work that we do. But given the nature of the work that we do, to take photos of it would be completely unethical.
So, we had 40-plus pages of stories from families that we've supported over the years. And this thing called Venture had popped up on my feed, this new thing that would take words and make images. And so, we started to experiment. We were curious about what was possible. And we were able to take the stories of those families, keep them anonymous, but give their words power.
And we did a fundraising campaign: the picture isn't real, but the reality is. And we did that quite successfully; it opened up a lot of doors. Financially, it was successful, but it was more important in showing just how profound the opportunities coming from this new invention were. Fast forward to today: there isn't a place in Furniture Bank, or in any social profit, nonprofit or charity, where AI couldn't create more impact. And right now, the experiments we're doing are literally in every department. Some are more in the experimenting phase. Some are fully in production. We have AI-augmented phone systems. We have AI agents that have been trained on certain things to support us in the work that we do. The mindset is, we're really looking at the lowest-value work that we're doing, to free it up and create more opportunity to do the high-value work that we should be doing with our constituents and stakeholders.
Mary Barroll: Dan Kershaw clarifies what he means by having AI agents.
Dan Kershaw: Working with an agent, you essentially are creating a virtual employee. We have some that have been trained on how to write a good social media post. Not to do the posting, but to take the ideas of our staff and apply them to the best practices that are out there.
And then you can hand that off to another agent, trained the way you want, to help shape things and move them along. This is really a form of automation. We've had AI automation for 30 years: Netflix is AI, making recommendations for us. What's new, in the last three years, is that these agents go well beyond just prediction. You can have an agent that is trained up to help answer emails, or handle chat, or any number of things that are important to your organization.
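For listeners who want to see the shape of this, here is a minimal sketch of the chained-agent pattern Dan describes, assuming the OpenAI Python SDK. The prompts, model name, and helper function are illustrative, not Furniture Bank's actual setup.

```python
# Minimal sketch: one "agent" drafts a social post from staff notes,
# a second rewrites it against house best practices. Assumes the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_agent(system_prompt: str, user_input: str) -> str:
    """One 'agent' here is just one system prompt applied to one input."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

staff_notes = "We delivered 40 beds to newly housed families this week."

# Agent 1: turn staff notes into a draft post.
draft = run_agent(
    "You draft social media posts for a furniture charity.",
    staff_notes,
)

# Agent 2: rewrite the draft against the house style guide.
final = run_agent(
    "You edit posts to match our style guide: warm tone, under 280 "
    "characters, end with a call to action.",
    draft,
)

print(final)  # a human still reviews before anything is posted
```

The design point matches what Dan says: each agent does one narrow, low-value step, and a person stays in the loop at the end.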
Mary Barroll: Dan Kershaw describes a pivotal moment for him, that sparked Furniture Bank’s move towards adopting AI tools.
Dan Kershaw: Before AI, in 2015, I was speaking to a bunch of grad students about social enterprise models. And, as grad students do, they got all argumentative about just how important furniture really is. But there was a young lady sitting there who brought the room to silence. In a very soft voice, she explained that, had Furniture Bank not been there, she would not have been able to keep her family together. She would not be in this grad program.
And when I went back to the office, I looked up: what did Sarah get? She got sofas and beds and things like that. But what really struck me was that, because the person in my seat had made the decision to invest in technology, to create more capacity, to create more opportunities for families to get more appointments, we had room so that Sarah could get her appointment. And for me, this is the parallel for our sector right now. There's lots of people who want to throw rocks at AI. But we, as a sector, aren't putting the genie back in the bottle. It's here. It's not changing. And even if AI doesn't get any better, it will radically change our sector. And I, for one, think we as a sector should engage with it. So, for me, it's always about how we can help 10 more Sarahs; 10,000, 100,000, a million, it doesn't matter. If you're sleeping on the floor, you don't have the goods to actually be in a home, like you and I are today. That's our charitable mission. And AI has already helped us be more successful.
Mary Barroll: Advancing nonprofit organizations’ missions and maximizing their impact, using data and technology, is exactly what the Canadian Centre for Nonprofit Digital Resilience, or CCNDR aims to do. Jason Shim serves as CCNDR’s Chief Digital Officer, and I asked him to tell me what patterns or trends he sees emerging about how Canadian nonprofits are currently using artificial intelligence in their work.
Jason Shim: Since we last talked in 2023, we're definitely seeing more organizations adopting AI. However, the adoption is still in its early stages. As of 2024, according to Stats Can, the number was sitting at about 4.8% of nonprofits. And I'm seeing that steadily increase anecdotally: when I give presentations on AI, the number of people raising their hands in the audience has been steadily increasing. What I am observing is that when it comes to nonprofits using AI in their work, it really depends on the organization, as well as the individual roles within the organization. I think, over time, more folks are reporting that they are using AI in part because many of the pre-existing tools that people are already using, like Google Workspace or Microsoft Office or Canva, are integrating AI features into them. So, we're seeing more people who are able to use features like built-in summarization tools, the auto-transcription functions in meeting tools, and meeting notetaking. For tools like Canva, there are functions that assist with image editing and generation, as well as drafting marketing materials, for example. And finally, there's ChatGPT, which is the big one that many people are aware of, and people are using those language models to summarize text or compose drafts or reports.
I think the one major shift has also been the wide availability of these language models. They're available online, for free with basic accounts. And on the technical side, one theme that I'm seeing emerge is that for some nonprofits that do have technical staff, tools like GitHub Copilot or Cursor, used to generate code or prototypes and to debug, have seen tremendous growth, definitely on the for-profit side, and we're seeing that grow on the nonprofit side as well. And finally, we're starting to hear more case studies: more nonprofits who are weaving it in and exploring how they can use it for program delivery or analyzing data. We are still seeing new use cases emerge but, broadly speaking, a gradual adoption from a trend perspective.
SFX: Sounds of electricity
Mary Barroll: The Human Stack is dedicated to helping nonprofits succeed with their technology. Tim Lockie is its CEO and founder of Now IT Matters. And in 2021, Tim developed and launched Digital Guidance, a methodology designed to move nonprofits from tech-resistant to tech-resilient and transform the nonprofit industry into a human-centered digital space. Tim Lockie, who also co-hosts the podcast, Now IT Matters, believes that digital transformation is affordable and scalable with nonprofits of all sizes.
He describes some use cases of nonprofits using AI, in both practical and impactful ways.
Tim Lockie: There's a community foundation that I've done some work with. They had been trying to create an AI policy for months. They had gotten stuck in several areas and didn't know what to do. I came in and gave some guidance on that. Their COO took what I had said and made it go way further. It's an incredible policy that they've made, because that policy identified that using Google Gemini fell within their existing cybersecurity and data privacy work, so they could actually use their data in a safe and ethical way. They saved $25,000 by creating a deep research report on a housing coalition project they had been working on, something that would have taken a consulting firm three weeks. Got it done in five minutes. It was really high quality and worked really well. And so, their organization has actually saved a lot of other time and work.
Mary Barroll: Jason Shim describes how AI can be used to import dense research and generate data in accessible formats.
Jason Shim: Gemini Canvas gives a capability to import really dense research reports, and you can ask the AI to generate an interactive app or a website, to better understand some of the concepts that are presented. And it has been quite remarkable to see what it comes up with and also to present some interesting visualizations around data that would otherwise be quite dense and locked up in tables within our research report.
The other is around things like classification for public data sets. Some very, very large data sets, that might be tens of thousands of rows, would manually take the better part of a couple of days to slowly go through, row by row. A well-trained AI classifier can do that work in the background, thereby freeing up time for folks to do the more advanced analysis or to take on additional data sets.

Mary Barroll: No one can argue against methods that free up precious time. Nonprofit organizations can also use AI tools to help them carry out repetitive and time-consuming administrative tasks. Jessica Vestergaard, founder and owner of Written with Purpose, a consulting company that serves the nonprofit sector, shares her admin AI strategies.
Jessica Vestergaard: I love to use AI for administrative tasks. It's especially helpful when you're managing multiple grant deadlines, as I know a lot of nonprofits are. You're not usually just working on one grant; you're working on multiple grants, and you're managing drafts, and you're getting reviews done by teammates and other departments. You're coordinating with your accounting department and getting their help with budgets and financials. So, it's a team effort. There are a lot of moving parts and a lot of deadlines. I really love to use it for creating things like checklists.
Oftentimes, you can download grant guidelines from a funder's website. Paste them straight into ChatGPT and ask it to create a checklist for you so that you don't miss any important pieces. And it will create a checklist; it will include all the documents that you need to gather. It will help you remember deadlines and things like that. And it can synthesize those grant guidelines, which can often be wordy, complex and difficult to navigate, and put them into plain language for you, so you're not sitting there trying to decipher what the funder actually means. You can also do things like create work-back timelines, so you have quick reference points for when deadlines are due. So, those are some of the ways that I use it for project management.
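As a sketch of Jessica's checklist workflow, the prompt below could be pasted into ChatGPT along with a funder's guidelines. The wording and file name are illustrative, not a prompt quoted from the episode.

```python
# Build a reusable "guidelines to checklist" prompt from a local file.
# The prompt wording and the file name are placeholders.
from pathlib import Path

CHECKLIST_PROMPT = """\
You are helping a nonprofit prepare a grant application.
From the guidelines below, produce:
1. A checklist of every required document and attachment.
2. All deadlines, as a dated list.
3. A plain-language summary of what the funder is looking for.
Flag anything ambiguous instead of guessing.

GUIDELINES:
{guidelines}
"""

def build_checklist_prompt(guidelines_path: str) -> str:
    guidelines = Path(guidelines_path).read_text(encoding="utf-8")
    return CHECKLIST_PROMPT.format(guidelines=guidelines)

if __name__ == "__main__":
    # Assumes you saved the funder's guidelines to this text file.
    print(build_checklist_prompt("funder_guidelines.txt"))
```

Keeping the prompt in a file like this means the same instructions get reused for every grant, rather than retyped from memory each time.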
Mary Barroll: Clearly, there are many AI tools that could enable organizations to save time and be more efficient, including with the never-ending task of writing grant applications. Jessica Vestergaard provides expert, professional grant writing services to her nonprofit clients. She is an early adopter of AI and has developed effective ways to harness AI tools in her grant writing services. She's also partnered with CharityVillage on developing a new e-learning course designed to teach other nonprofit professionals how to use AI in their own grant writing. Here, she explains how she uses ChatGPT to write grant applications.
Jessica Vestergaard: ChatGPT is extremely helpful with brainstorming your program activities. Oftentimes funders will ask you for a list of all the activities that you're going to do, from project planning, to hiring your team, to developing metrics and surveys, to your different program activities. Those are things that we used to have to map out ourselves. Now, ChatGPT can help us brainstorm the activities that we would want to include in that list. It's also really good for synthesizing and summarizing information.
It's also great for editing and reviews. You can paste in a finalized grant application or a first draft. And paste in the grant guidelines of the grant that you're working on. Ask it to review your finalized application and provide advice and suggestions for how to improve your grant narrative to better match the grant guidelines, which is extremely helpful. I've done this myself and it comes back with some really incredible suggestions, things like, don’t forget to add this here. You should increase this section and add more information on this. And it'll give you reasoning, as well, which is really exciting.
When it comes to budgets, of course, we're not ever gonna replace our accounting team. But what it can help you do is create line items, based on the project that you're talking about. So, if in your grant you really want help with getting program supplies, then ChatGPT can make a budget for you and make sure to add program supplies as a line item. It can review your grant, put in those line items for you and help you create a budget template that you can then use to put in your real numbers. It can also help with things like reporting. If you collect survey information from your program participants, you can add that in. If you have notes from your staff about how the program went over the year, you can collect all of that information and then, again, ChatGPT can work alongside you to synthesize that information in a way that really makes sense for a funder.
What it can also do is help you find grants that are more aligned with your organization, so there's more of a chance that you're actually going to get the funding. ChatGPT, for example, can do things like review the grant guidelines, review a funder's website against your website and say, hey, this is a good fit, or, this is not a good fit, I wouldn't suggest applying for this. It can even help you determine which grants to apply for, which is really helpful.
Mary Barroll: There are AI tools and platforms now to enable so many different tasks. There are tools to help speed up your grant application process and there are tools to help your organization make videos. Dan Kershaw shares how generative AI's video-creation abilities have had a measurable impact on Furniture Bank's services.
Dan Kershaw: We all are creating content that we want out there. We want the world to see it. We want Google to see it. We need all of that. And all of the algorithms around us mean that you really need to be video first. For best practice, all of your content should be on YouTube, should have supporting web pages, should have infographics.
The old way, for me is to take an idea that I have at 6 in the morning, when I'm having my coffee, and turn it into a video. It would stop at my idea because I'd get so far and then the day would start
SFX: Sound of alarm beeping
and then the hands that have to be touched, the systems that have to be touched, the people and the processes and the budget.
As you know, producing video is not like that. But one of the things we just did recently, in part as an experiment, was: could we take a one-hour webinar, edit it so it's brand-aligned, turn it into a blog post with supporting pieces, and post it on social media? And we did it in three hours. For me, to be able to take something that I think is very important to share and create it in a way that is accessible, essentially within the morning, is profound.
Mary Barroll: What is truly profound, Dan Kershaw emphasizes, is how much time using AI tools can save for nonprofits, so that humans can have more time to do higher impact work.
Dan Kershaw: If you talk to the next 100 nonprofits and ask, how many of you are ready to create videos every day and publish them, using best practices? 100% would say no. And yet, with the very same tools we use, you could. We're now able to create evergreen content that the world can consume, whereas historically, we would either not have the skill sets or the budgets. So, we're really unlocking the potential of our team. We've got people that are starting to see the other side. Can we get a license? We use a product called Descript. Is it free? No, but does it let us do some amazing things very quickly? Heavens, yes. And so now we're getting requests of, here's a pain point, and then we go experiment. Where a lot of organizations start is: there are things we all hate to do. When you really dig in, there are steps in our jobs which really are not fun, and there's no actual conversation about making that work decent. So, we're looking at leveraging AI tools and systems and processes, so that the automation we're doing can free up that time to do the higher-value and higher-impact work.
I think we're now producing videos for the website in under an hour, things that would have cost eight grand the old way. So, there's a lot of opportunity. As a listener, you're probably thinking: you're taking work away from the videographer. I'm not. I would never have money to hire a videographer. And it's an important distinction. We all don't have all skills. AI gives us skills. I can now create videos. I can now translate. So, it gives the opportunity for organizations that never thought they could do something to explore whether they could and should.
Mary Barroll: Furniture Bank has also gotten help from AI tools to use best practices, in its approach to fundraising and donor engagement. Dan Kershaw explains.
Dan Kershaw: I'm 11 years young into the charity space. I arrived. I grabbed all the books. I read them all. I'm talking to the 85,000 small organizations that are really just volunteer-driven, no budget, and trying to change their world. How are you meant to find your funders, your fundraising dollars, when you're not using best practices? And the old way, you would never have the skills or the tools to do that.
So, in our case, it's streamlining, again, back to stripping out that not-high-value work. Like, you know, you chase 200 grants. They all ask the same questions, 200 different ways, with 200 different word counts. So, for us, we have built up a knowledge base of what is true for us, using as many words as we want to describe what we do. And we have an assistant, an agent, that will help map our answers across applications. If they want a mission statement for Furniture Bank and you have 246 characters: go. Next application. That frees up time for us to do higher-value work, like individual donations. The majority of our donors are actually in-kind donors. If our crew came to your home and took away your dining room table, I know you gave me a dining room table, and best practice says I should be responding to you and talking to you about your gift. We're now in the process of building that, so we'll really be able to personalize the communications we send out, to speak to each donor based on how they've engaged with us. You need to be clear about what you want to do, but we're now in a place where the capacity is there: if you want to learn it, you can.
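A hedged sketch of the knowledge-base pattern Dan outlines: one canonical answer bank, with a model compressing the relevant answer to each application's character limit, and the limit enforced in code. It assumes the OpenAI Python SDK; the knowledge-base text, model name, and retry count are all illustrative.

```python
# Sketch of the "246 characters, go" workflow: answer grant questions
# from one canonical knowledge base, retrying until the length limit
# is met. Assumes the OpenAI Python SDK and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

KNOWLEDGE_BASE = """\
Mission: we turn empty housing into homes through the gift of
furniture, supporting families leaving homelessness and crisis.
(Full canonical answers would live here, in as many words as needed.)
"""

def answer_within_limit(question: str, max_chars: int) -> str:
    for _ in range(3):  # retry if the model overshoots the limit
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system",
                 "content": "Answer using ONLY this knowledge base:\n"
                            + KNOWLEDGE_BASE},
                {"role": "user",
                 "content": f"{question} (maximum {max_chars} characters)"},
            ],
        )
        answer = response.choices[0].message.content.strip()
        if len(answer) <= max_chars:
            return answer
    return answer[:max_chars]  # last resort: hard truncate for human review

print(answer_within_limit("What is your mission statement?", 246))
```

Grounding the model in a single approved knowledge base, rather than letting it improvise, is what keeps 200 slightly different applications telling one consistent, true story.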
SFX: Bird sound
Mary Barroll: Each nonprofit organization, depending on its mission, its size and its capacity, needs to decide which AI tools might work best for them. This is what Anne-Marie Newton at the CAMH Foundation recommends.
Anne-Marie Newton: I think it really does depend on your specific situation. Think, in your own business, about what is a low-stakes, maybe high-visibility project where you can address low-hanging fruit. For us, it's going to be, perhaps, some things around how we craft really short, spiffy messages, based on personas that we're defining and finding in our donor base, and using that as a launchpad. It is definitely not a replacement for the creative brains that we have on our staff. It's not a replacement for really thinking carefully about authentic engagement with our donor base. It's just giving us some advantages on thought starters that we can then adapt and customize as needed.
Mary Barroll: Anne-Marie Newton explains how CAMH uses AI to write messages to donors, based on defined personas, helping to craft the tone and language that best engages a particular donor with specific characteristics.
Anne-Marie Newton: So, the personas are a much deeper conversation we're having internally and externally around what are the archetypes of the people who give to us? And if we are going to think about how to speak authentically to them, and it's the people who give to us and the people we want to give to us, of course, if we want to engage them in a way that makes sense, can we make sure that the prompts, that we're putting into an AI model, speak to that persona that we want to be speaking to?
So, they can help us with tone, even if we don't have someone on our team who can think through how to speak to, I'll use one of my kids as an example, like a 16 year old boy who is really into rock climbing. That level of specificity can be really teased out, in some of these AI platforms. And that's where we're testing and learning how that rolls out.
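As an illustration of the persona idea Anne-Marie describes, a prompt might be assembled from persona attributes like this. The fields and wording are hypothetical, and, as she stresses, any draft would still be reviewed by a human before it reaches a donor.

```python
# Hypothetical persona-conditioned prompt builder. The persona fields
# and instructions are invented for illustration, not CAMH's system.
PERSONA_PROMPT = """\
Draft a short thank-you message for a donor matching this persona:
- Age group: {age_group}
- Interests: {interests}
- Giving history: {giving_history}
Tone: {tone}. Two sentences maximum. Make no claims about the donor
that are not listed above.
"""

def persona_prompt(persona: dict) -> str:
    return PERSONA_PROMPT.format(**persona)

print(persona_prompt({
    "age_group": "teens",
    "interests": "rock climbing",
    "giving_history": "first-time gift via a school fundraiser",
    "tone": "energetic but sincere",
}))
```

The "make no claims not listed above" line reflects the guardrail Anne-Marie describes later: the model works only from what the organization actually knows about the donor.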
Mary Barroll: In addition to helping craft donor-specific emails, there are some other behind-the-scenes applications for AI that Dan Kershaw thinks nonprofit leaders should know about.
Dan Kershaw: HR manual. We all have them.
SFX: Sounds of large sheets of paper being turned.
They're all 60, 80 pages. Nobody's actually read them. And inevitably somebody asks, what's the policy on X? The old way is that you find somebody who's kind of read it and they go find the answer, but you don't really know. Today, in our case, we're on Google Workspace and we have access to a product called NotebookLM, and everybody would have access to this. It only looks at what you give it. So, we fed it all of our HR policies and turned on sharing within our org. So now, when staff have a question, they will instantly get the answer, with the exact clause and the exact quote referenced, without having to find somebody.
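The pattern Dan is describing, answering only from documents you supply, can be sketched without any hosted product. Below is a toy retrieval step using TF-IDF from scikit-learn; NotebookLM itself is a hosted tool, and the policy clauses here are invented examples.

```python
# The idea behind "it only looks at what you give it": retrieve the
# most relevant clause from your own policy text before answering.
# Toy sketch using TF-IDF (pip install scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policy_clauses = [
    "Vacation: full-time staff accrue 1.5 days per month.",
    "Remote work: staff may work remotely up to 3 days per week.",
    "Expenses: claims must be filed within 30 days with receipts.",
]

vectorizer = TfidfVectorizer()
clause_vectors = vectorizer.fit_transform(policy_clauses)

def most_relevant_clause(question: str) -> str:
    """Return the policy clause most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, clause_vectors)[0]
    return policy_clauses[scores.argmax()]

print(most_relevant_clause("What's the policy on working from home?"))
# -> the remote-work clause, with the exact wording to quote back
```

Because the answer is the retrieved clause itself, staff get the exact quote, which is the property Dan values in NotebookLM.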
Another thing: all these AI engines, they can see, they can hear, they can talk. Toronto has over 200 languages. We don't have 200 translators. So, we've experimented. This is very alpha, very experimental, but, notionally, when our volunteers take a family onto the floor, the ChatGPT experiment starts with: what language are you speaking? If the family speaks Malay, the GPT is told: whenever you hear Malay, say it in English; whenever you hear English, say it in Malay. And that suddenly allows your retired schoolteacher, who's a volunteer, who comes in on Tuesdays and really just wants to work with families but has been unable to communicate with 60% of them effectively, to suddenly do so. Is it perfect? No. As we all know, the first time we make AI do anything, it's never right, sometimes not the second, but we're always iterating.

Mary Barroll: Indeed, it's important to keep in mind that AI tools are imperfect and the results they produce for you are based on what information you feed them. Jessica Vestergaard makes this analogy.
Jessica Vestergaard: I always like to tell nonprofits to think of ChatGPT like a calculator.
So, a calculator is only as good as the data and the numbers and what we put into it. If we put in the wrong numbers, if we don't put in any numbers at all, then we're not gonna get a good answer back. It's not going to be complete or correct or something that we can work with.
And the same thing is true for working with ChatGPT. So, as you get more advanced, I would encourage nonprofits to play around with adding as much context and detail as possible. Do things like attach documents. If you have a case for support already, or a website that you can pull from, or previous grant applications that you've written, you can add that information along with your prompts, to make sure you're providing context to ChatGPT, so that what it gives you back is much more complete and you get way better answers.
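Mechanically, Jessica's advice amounts to bundling your existing materials into the prompt. A minimal sketch, with placeholder file names:

```python
# Bundle existing organizational documents into a prompt so the model
# works from your real language. File names are placeholders.
from pathlib import Path

context_files = [
    "case_for_support.txt",
    "previous_grant_application.txt",
    "about_us_page.txt",
]

def prompt_with_context(task: str) -> str:
    sections = []
    for name in context_files:
        text = Path(name).read_text(encoding="utf-8")
        sections.append(f"--- {name} ---\n{text}")
    context = "\n\n".join(sections)
    return f"{task}\n\nUse only the background material below:\n\n{context}"

print(prompt_with_context(
    "Draft a 200-word needs statement for our youth program."
))
```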
Mary Barroll: But this idea of giving ChatGPT information, information that may be confidential, is precisely what makes a lot of nonprofit organizations reluctant to embrace these new AI tools. Their concerns are legitimate, and Elena Yunusov sums them up well.
Elena Yunusov: One of the biggest challenges that I would identify is about data, and the necessity of balancing data collection with privacy concerns and ethical, bias-related concerns. It's a twofold challenge. On one hand, you may want to minimize data collection, because if you are serving vulnerable populations, maybe you don't want to collect as much data about them, out of an intent to protect the vulnerable constituents in your purview. On the other hand, by not collecting information about them, you may be in danger of perpetuating biases, because AI feeds on data, and you can't then offer that data to AI systems. Therefore, there is a danger of biasing service delivery based on the previous data that you have collected. In the AI space, that's called a data desert. That is a concern because we do not want to perpetuate inequality with these systems and tools. And that does go back to the types of data that are available internally in large organizations, and how they are being collected, stored and shared.
Mary Barroll: Elena Yunusov highly recommends that organizations start learning to use the AI tools now rather than later, and, in so doing, help contribute to the process of working out the ways AI can be used responsibly and ethically.
Elena Yunusov: Unfortunately, I think we're at a place where not implementing these tools won't be an option for much longer. And so, we're very excited to work with organizations that are ready to chart the course for others, because I think the more we sit on the sidelines as a sector, the higher the costs of that will be.
I would rather we do this now, for essentially pennies on the dollar, and learn it, and then have systems that can evolve as the technology evolves. There are many other challenges that we've seen people articulate, and I am happy that we have a program now where we can hopefully address them in some sort of systemic way. While it's not perfect, we do need to, first of all, be sophisticated users and adopt it responsibly and ethically, using best practices, but also help shape the development of the technology going forward, to bring that missing voice, the civil society voice, to the stakeholder table where the future of AI technology is being shaped.
But the data part is extremely important. You don't want to bias against vulnerable populations in how you collect all of the data sets, and you don't want those data sets to amplify inequality. So, that part is extremely important. And of course, I would insist on, if possible, involving those populations in the system design as well, whether it's through some sort of feedback loops or design shares or some other types of consultations. And of course, being transparent about the systems that are being built, if they are going to affect service delivery, so that people can thoughtfully opt out if they are concerned about the data collection, for example. There needs to be a power balance; there needs to be a voice internally within organizations, for different functions, but also externally, for those they serve: the stakeholders and members and constituents and clients. It shouldn't be a closed-door exercise; there should be transparency throughout. I think that would be a human-centered design principle. And of course, AI tools are not done once they're launched. There needs to be ongoing monitoring and evaluation of the systems in place, and that needs to be built into the transformation plan. And there needs to be a voice for the people the systems are affecting, so that, should there be any signs of inadvertent second- or third-order consequences, they surface faster.
SFX: High-pitched digital sounds.
Mary Barroll: One of the nonprofit sector's major ethical concerns is just how to use data responsibly. Jason Shim explains what happens when we share information with AI.

Jason Shim: I remember, many years ago, when folks were starting to get online, that a general thing to keep in mind was: don't share anything on the internet that you wouldn't want posted on a bulletin board. And so, sharing private information with a public language model is a variation of that. One of the things to consider is that if you're not paying to use the product, you may be the product itself.
If you are copying and pasting private information from your organization into a language model, like the free version of ChatGPT, there is a potential risk that what you're copying and pasting may be used to guide the training of the data sets that are used to generate responses. The risk is that, at some future point, someone may pose a question and bits and pieces of the data that you previously pasted into the language model may appear in a fractured form. There have been proofs of concept from researchers who have generated prompts to exfiltrate data that may have been used in some of the training data sets, as well. So, there are some organizations in the technical space that have outright banned their developers from using some of the language models, for fear of their private, proprietary code being exposed to the language model.

There are ways to mitigate against that, and one of them is to make sure you're reading the fine print in the user agreements. Many of the models have paid or enterprise tiers where, if you're paying for the product, the fine print may explicitly state: we will not be using your queries to train our models, thereby ensuring that the data you are putting in is protected. There are other ways to protect your data, and that may involve a more technical implementation: a standalone, self-hosted model that isn't connected to the broader internet. It does require more technical expertise in-house to make that possible. But I'm putting that out there so that folks are aware that there are other ways of approaching it, that it doesn't always have to be the big models that are out there. You can download a model, put it on a decently fast computer, you could even host it on a pretty fast laptop or desktop, completely disconnect it from the internet, and interface with that. Again, that does require a bit more of a technical lift, but it is possible, especially if folks are concerned about data privacy and security. So, there are a few different ways of approaching it. The most important thing to keep in mind is what I just described: if you weren't aware of some of the fine print on the free models, well, now you know.
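As a sketch of the self-hosted option Jason mentions, a locally running model can be called over localhost so that nothing leaves the machine. This assumes Ollama (https://ollama.com) is installed and a model has already been pulled; the model name and prompt are illustrative.

```python
# Call a locally hosted model through Ollama's HTTP API. Requires
# Ollama running locally and e.g. `ollama pull llama3` done first.
import json
import urllib.request

def ask_local_model(prompt: str) -> str:
    payload = json.dumps({
        "model": "llama3",     # example model, not a recommendation
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

# Nothing in this call leaves the machine, so it can be used with
# material you would never paste into a free hosted chatbot.
print(ask_local_model("Summarize our volunteer policy in 3 bullets."))
```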
Mary Barroll: In addition to learning to safeguard data, there is another important issue to watch out for: AI-generated misinformation. AI tools can create what are known as hallucinations, which are essentially AI-generated outputs that are, in whole or in part, simply made up and not grounded in fact. Elena Yunusov comments on this phenomenon.
Elena Yunusov: It's called hallucinations, but that's a really funny term, I think, for technology. It gives you incorrect information, and it can argue with you and insist that it's correct when it's not. And so that can be really off-putting, speaking of trust.
Mary Barroll: Jason Shim explains how these hallucinations function.
Jason Shim: One of the risks with AI is that, by the algorithms on which it is built, it is essentially a really, really good next-word filler. The example that I often hear is: if you were to say, Mary had a little … lamb, most people know how to fill in the next word. Imagine the AI is really, really good at that and can fill out entire sentences, paragraphs or documents. Now, the trouble is that, most of the time, it'll spit out something reasonable-sounding. But because it's constantly generating things that are reasonable-sounding, there is a risk that it may be very confidently incorrect in some of what it generates; those are the hallucinations. That's why it's really important to have things like a human-in-the-loop system reviewing the content that is being generated, to make sure that what is being generated is actually grounded in truth and actual fact.
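Jason's "next-word filler" intuition can be shown with a toy model: count which word follows which in a tiny corpus, then predict the most frequent follower. Real models work over vastly more data and context, but the failure mode he describes is the same: output is what is likely, not what is verified.

```python
# Toy next-word predictor built from bigram counts. Real language
# models are enormously more sophisticated, but share the core idea:
# predict a plausible continuation, with no built-in notion of truth.
from collections import Counter, defaultdict

corpus = "mary had a little lamb little lamb little lamb".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("little"))  # -> "lamb", by frequency alone
```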
Mary Barroll: Human-in-the-loop or HITL can be defined as a model requiring human interaction and oversight. This is obviously crucial in the nonprofit context. CAMH Foundation’s Anne-Marie Newton speaks to the importance of human quality control, to troubleshoot issues around accuracy.
Anne-Marie Newton: What I really want to make sure, of course, is that donors or the people, the stakeholders that we are speaking to, feel seen in the way that they want to be seen by us.
And so, with quality control measures in place, we really need to be very vigilant that we're not putting inaccurate information into letters or anything like that. So, everything that gets generated by AI is going to have to be reviewed by a human; it's not an automated process, at least in these early stages. We just need to be extremely, extremely careful about how we deploy it and how the data is reflected back to our communities. And the reason it makes me personally quite nervous is that I actually have two examples from this week alone. We issued an RFP for some consulting services and got responses back. One of the submissions had a section about CAMH, and it was clearly written by ChatGPT, because all of the information was wrong. Our founding date was wrong. The number of countries we partner with: wrong, wrong, wrong. So, it automatically negated the trust we would have in that company, and they will not be getting our business. I also got a call from a donor, who I'm very close with and who uses me as a bit of a sounding board in her philanthropic efforts across the city. And she had gotten a letter from an organization that just didn't see her in the way that she is impacting that organization. It left out some incredibly important things about what she does for them. And that is, I think, the worst possible outcome we could have if we don't think very carefully about how we deploy these types of tools. So, just recognizing that, in everything that we do, technology and AI especially are tools, and they are one of many, and they are not panaceas for all of our work. It's an assistive device. It's not a replacement for our accountability and transparency.
Mary Barroll: Organizations should always have someone verifying any AI-generated content. And there are ways to improve the accuracy of the content that AI generates for you. Jason Shim explains.
Jason Shim: When it comes to developing prompts, there are certain ways you can phrase things to ensure the AI goes through a step-by-step procedure and thinks through things carefully. One way is to ask the AI to conduct a certain task and then ask it to show, step by step, how it is thinking through the problem and to show all of its work. That will encourage it to lay out the step-by-step thinking for how it arrived at its conclusion, as well as the relevant citations.
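A minimal sketch of that "show your work" prompt pattern; the exact wording is illustrative rather than a prompt Jason recommends verbatim.

```python
# Wrap any task in an auditable "show your work" instruction.
# The phrasing is one illustrative variant of the technique.
def show_your_work(task: str) -> str:
    return (
        f"{task}\n\n"
        "Work through this step by step. For each step, show your "
        "reasoning and name the source you relied on. If you cannot "
        "cite a source for a claim, say so instead of inventing one."
    )

print(show_your_work(
    "Estimate how many volunteer hours our program logged in 2024 "
    "from the attached monthly reports."
))
```

The point is not that the model's self-reported steps are guaranteed correct, but that they give the human reviewer something concrete to check.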
SFX: Sound of bells
Mary Barroll: So, there are ways to prompt your AI tool to generate the best results. There are also different AI tools to use for different kinds of needs. Tim Lockie puts it this way.
Tim Lockie: Think about what outcome are you looking for? And the more clear you can be about what it is that you want, the better it's going to be. When I think about AI, I think about it as a drive-through restaurant, like Mark's drive-through, right? And if you drive up to Mark's drive-through and you order “food,” you'll get food, when you go to the window. And it might be like a banana and yogurt. It's going to be something. They're going to choose for you. And who knows how good it's going to be. If you order, I want a double burger with bacon, barbecue sauce, grilled onions, raw jalapenos on a whole wheat bun, with curly fries and barbecue sauce, you're gonna get a very different experience than if you just order “food.” And one of the things that happens is that the more you order, the more you know what you like, the more you know how to order it. And the more you go to different drive-throughs. If you go to Taco Bell, and you're trying to get a burger, it's not going to go as well. So, knowing, when you go to Copilot, you’re going to ask Copilot certain types of questions that are probably going to be more pointed, they're going to be a little bit easier, they're going to be less creative, they're going to rely on other work, documents and files that you have in your system. And that's what it's really good at. If you want somebody to act more like a personal assistant, you're going to ask Chat GPT. That's because they are different drive-throughs. They offer different food. I love Claude for a writer. I think it's the smartest one. So, if you want a really good writer, that's a great one. Really, really good answers. Chat GPT is a great virtual assistant. I use it as a thought partner, all the time. So, some of it is knowing which ones to go to, for what, and then also just being aware what options you have, when you're ordering. So, I want a summary. That's very different than I want an outline, which is different than I want a poem, which is different than I want a table. All of those actually make your prompts more rich.
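Tim's ordering analogy, made concrete: the same request with different output instructions produces very different artifacts. The wording here is illustrative.

```python
# Same underlying request, three different "orders" at the window.
base_request = "Describe our furniture delivery program for donors."

orders = {
    "summary": "Respond with a 3-sentence summary.",
    "outline": "Respond as a nested outline.",
    "table":   "Respond as a table: step, who does it, time needed.",
}

for style, instruction in orders.items():
    print(f"--- {style} ---\n{base_request} {instruction}\n")
```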
SFX: News buzz
News clip “It’s an agent that can interact with the web and get stuff done. I give the agent access to my Gmail and my Google calendar.”
News clip “Have you interacted with a conversational AI before?” “So, which aspect do you think is the most important when interacting with a conversational AI?” “No worries, just share what’s on your mind.”
News clip “I think ethics becomes more important as something becomes more impactful. And as AI becomes more impactful, the more we have to think about the ethics of AI.”
Mary Barroll: Many nonprofits still have concerns about whether using AI-generated content is ethical. Anne-Marie Newton shares her thoughts on how she’s come to think about integrating AI responsibly into operations, especially in a healthcare context where ethics, equity and trust are so critical. In beginning to responsibly implement AI into its operations, the CAMH Foundation has had the advantage of being able to benefit from the hospital’s experience with it.
Anne-Marie Newton: We are very lucky, first and foremost, to have a hospital that has thought through the ethical use of AI on their side. So, we very much draw on their ethos and policies around all of those things: equity, trust, transparency, accuracy. We have made sure that our policy around AI and its use mirrors that of the hospital and, as you can imagine, adheres to the highest degree of privacy and trust with patient data. So, we're lucky to have that as our guiding light in the background.
Mary Barroll: Here’s Elena Yunusov’s perspective on how organizations can comfortably make the shift towards exploring and embracing AI.

Elena Yunusov: I think nonprofits are amazingly lucky from a change management standpoint, because they are mission-driven organizations. Tying it back to the mission, I think, helps people understand the role that they can play in the change that organizations will go through. Now, if people opt out of that change, that will, of course, be handled on an individual, case-by-case basis within specific organizations. But I hope that it can be resolved by anchoring things at the mission level, looking at how AI can support and supercharge that mission, and addressing the fears and ethical concerns grounded in that, and in the industry best practices that do exist, so organizations can have certainty and a level of assurance that they are doing the best they can do. It means involving people in these conversations internally, involving them in the design of these systems and in these decisions at a variety of levels. Because, even from a use-case discovery standpoint, from very early days, doing this in isolation in the boardroom won't get you the diversity of points of view that will surface the use cases and ultimately strengthen the organization. And so, I think it's an opportunity more than a danger. You have a once-in-a-lifetime opportunity to really engage people in something extremely meaningful that will touch them in the variety of roles they sit in, and to give them a way to be heard, whether it's through use-case discovery, through testing and validation, or through the building and implementation. There are so many points of intervention for people that I hope ethical concerns can take somewhat of a backseat.
Mary Barroll: Tim Lockie agrees that, when it comes to ethical concerns around adopting AI tools, organizations may be so focused on the risks, they may overlook the benefits.
Tim Lockie: Part of responsible use, maybe even ethical use, is finding ways that you can do new things that you couldn't do before, that help populations that you couldn't help before.
I think one of the pitfalls that we can fall into, as a value-based industry, is to worry so much about the ethical side of risk that we don't think about the ethical side of benefits. One of the first things I want to say is we need to balance out the risks with benefits and say, it is important to look at both sides of that.
The second thing I would say on that is to understand where there are societal and ethical risks that, no matter what you do, your organization and your participation in AI will not change. The work I've done in creating this framework has been about deciding the difference between things that we need to accept, things that we need to have the courage to change, and the wisdom to know the difference. And that is what is so challenging about these ethical dilemmas we find ourselves in. One way to say it really simply is: where there are ethical considerations around AI, like implicit bias in training models, which you can't do anything about yourself, we can still advocate for change.
For leaders especially, we need to focus on the ethical risks and concerns that are inside of our organizations that we can do something about, and we need to focus on those, first and foremost. And I'll tell you, one of the most important ones is two-factor authentication. We're really worried about data getting into training the models, which is actually much less of a risk than just cybersecurity, of people reusing passwords and not turning on two-factor authentication. So, we need to also ground our actions in real actions that we can take, that are going to mitigate risks that we can change.
Mary Barroll: Dan Kershaw echoes the sentiment that ethical concerns, though important, should not deter organizations from engaging with AI and benefitting from its tools.
Dan Kershaw: I was on 36 panels last year. The word ethics came up in every single one of them. Today, I'm going to be contrarian. Today, it is unethical for charities to use ethics as a reason not to engage with AI. Now, that being said, on the ethics of AI, the models are trained on what has been out there, so you have to be mindful of their training. Until you start getting into the weeds of the ethical conversation around AI, it's very hard to explain. There are safety protocols. We don't put donor information into AI. So, there are safety and security aspects.
Every model is different. There isn't just one AI. There are hundreds of AIs, the most popular being OpenAI's ChatGPT, and then you have Google's, and we could go on for 200 more. That is where the dangers are, and that's part of the reason why I often say pick a major for your organization. Choose to be in Microsoft. Choose to be in Google. Choose to be in OpenAI. For those organizations, the reputational risk of being bad actors is much higher than for any old AI engine, where you don't have the transparency.
Mary Barroll: Dan Kershaw also shares some advice about how to safeguard against some of AI’s weaknesses.
Dan Kershaw: In the same way you always review your policies, you'll want to do those things to make sure you're onside. We're looking at just how you generate videos. For videos, make sure you can tell it's AI. Stylize it, black and white or something, because I want our staff, our volunteers, and the clients who choose to be on our cameras to be the high-fidelity real thing. But again, in making that decision for your organization, you can't engage with those ethical conversations until you have started walking in the waters to see what fits your culture, your organization, your history, because it is going to be different for many organizations.
Mary Barroll: As organizations consider wading into those AI waters, I asked Tim Lockie to speak to aligning AI’s immense power with an organization’s values.
Tim Lockie: Unfortunately, you don't get an option to live in a world without AI. It's going to be here forever. It is like the invention of money or the invention of the internet: they're so big, they're neither good nor bad. They are just significant shifts in technology that are here to stay.
On the human side of my work, the thing that I stress the most is I want people to have the tools to choose the level of participation that is in alignment with their nervous system, their ethical beliefs, and what they think is good or bad. For some people, that means they're just not participating. I support that. What I don't support is people not participating or avoiding it without really understanding what they aren't participating in and what they're avoiding.
Mary Barroll: For those who are reticent to participate in the new world of AI, Tim Lockie addresses concerns about the values and systemic biases incorporated into AI.
Tim Lockie: The way that bias shows up in these systems is important to understand. AI is just math, math that is run, lots of times, over lots of historical data. And this is one of the only places they've found where there's no evidence of diminishing returns on the level and quality of intelligence: if you give more data and more training to an AI, the machine learning keeps learning more and more. That learning had to start someplace. And so, it started with what was easiest to find, which is all digitized information, which is basically the internet. And the internet wasn't developed to be biased; it just was biased because of privilege. And that's a reality. So, I want to remove the idea of intention from this and focus on bias as an outcome, because I feel like that helps the conversation move forward. Where organizations will run into this is where they are working in areas that have traditionally had less documentation. Things like finance, science and business have had so much writing around them that it's really easy to find data for them. The nonprofit space has not had that same level of documentation. So, it's already harder to find that kind of data for good training.
So, one other element to this is that it's not just that the bias is implicit; it's that the data is incomplete as well. And I think that's important to understand. What safeguards can you put on that? Read everything before you post it is one of the most important. I also feel like it's important to say that humans are rife with inconsistencies and opinions, and we hallucinate intentionally and unintentionally, actually more than AI at this point, and that's part of what makes us human. It's not that we're trying to be perfect, or that humans are good and AI is implicitly biased; humans are implicitly biased as well. Some of the ways that we're implicitly biased are actually making the world better, and some of them are not.
Mary Barroll: There are many reasons nonprofits may still be reticent to integrate AI tools into their operations, and for some, there are also real barriers getting in their way. Jason Shim weighs in on some of the biggest ones.
Jason Shim: Financial and resource constraints are seen as some of the largest barriers to implementing solutions, specifically around the digital skills part of it. That's followed by simply feeling overwhelmed: folks are feeling that they're not sure how or where to start or progress. Interestingly, we've also received that feedback when it comes to issues around cybersecurity and how to advance that within an organization. So, there's this sense of trying to figure out, okay, there's this technology that has potential, but what do we do with it? AI has appeared very quickly, and organizations are trying to quickly figure out what to do with it. How does it impact the organization? How can we use it effectively?
I think that for AI to be implemented effectively within organizations, some of the things that apply to implementing technology in general also apply here. One is ensuring that there are policies and governance: are there policies to help guide the usage of technology? Are there policies around data, things like data retention, etc.? The other is resourcing: is there a role or function dedicated to technology? And does the budget reflect that dedicated resourcing for technology?
Mary Barroll: Here’s what Jason Shim advises for smaller nonprofit organizations with limited capacity and resources, who don’t want to get left behind in the growing AI landscape.
Jason Shim: Many of the pre-existing software packages that nonprofits currently have access to are steadily adding AI functionality. And so, as people continue to use these various software packages, they may see little AI pop-ups here and there, so it's worth taking a moment to familiarize themselves with what is emerging. For many organizations, that may be the form it takes: through their existing productivity suites or online tools, they will naturally be using AI without necessarily calling it AI. I think one of the challenges is the nomenclature of AI, in that many organizations were using things like transcription software prior to the most recent wave of AI and generative AI. It wasn't necessarily called AI at the time; it was just called transcription. Many organizations and individuals are already using AI and progressing just with what is available to them, without being left behind.
It's about being able to adopt and adapt the technology while also being aware of the risks. When that is established, it allows an organization to move more quickly: they're aware of the risks out there because they may have developed an internal policy to help guide some of that usage, and it can empower staff to quickly identify the technologies that may help them engage with AI and improve in their job roles.
With limited resources, I think it's worth recognizing that many of the existing productivity suites may have AI baked in. But if not, I would suggest examining what a paid account may look like, even if it's for a pilot, just to get a sense of what the systems are able to do and to ensure that data protections are in place. The models are evolving and adapting very quickly; there are capabilities that weren't possible three months ago that have suddenly become available in the past two weeks. So, having a paid account to keep a pulse on things can be one way to keep an eye on what is possible and to continue to assess what an organization might be able to do with AI.
Mary Barroll: For large organizations, with greater capacity, here’s what the CAMH Foundation’s Anne-Marie Newton has learned about incorporating AI into operations.
Anne-Marie Newton: I think everyone has a little bit of concern about AI and how to use it. And so, being really clear and intentional around when and how we're going to deploy it has been a good conversation to have, in multiple forums, at CAMH Foundation. It's easy to think that this is just coming, and we're just going to do it, and it's going to be fine. Working through where people feel concerned, whether about the future of their own jobs or about the quality control we're going to need to apply to make sure we are still reaching out to people in a meaningful, authentic and accurate way, is something we will continue to manage going forward. And I think that echoes what's in the Zeitgeist as well.
Mary Barroll: Anne-Marie Newton also shares how she’s addressed concerns from staff who may be worried that the implementation of AI could eventually cost them their job.
Anne-Marie Newton: To that, we have been really clear with our staff, through a written AI policy that our board has approved, about what this tool is and isn't for us. So, it's about being really intentional about how and when we're using it, how we will decide to scale up in certain areas, and the checks and balances for if we need to course correct or change how we're doing it, along with the quality control measures.
I think going into it with a good sense of your risk tolerance around any new tool adoption, and being clear about what you can and can't mess with as you're testing and learning from the process, has certainly been the approach that we have talked about and used in these early days of adopting these tools.
SFX: Digital beeps
Mary Barroll: Tim Lockie thinks organizations just need to have more conversations about implementing AI.
Tim Lockie: The biggest issue in tech in general is focusing on installing tech instead of having more conversations. And the biggest misconception is to see AI as a technology rather than as a new conversation about tools that you can use. I will guarantee you, organizations that increase the quantity and quality of conversations around the use of information and intelligence will outperform organizations that may have invested more in tools, because what is going to matter in the future is your ability to adapt as a team. And that is really, really challenging. So, I think, hands down, the most important thing is to have more conversations and to develop a very solid AI governance methodology with practical roles and responsibilities.
Mary Barroll: Those important conversations need to happen so that organizations can develop governance and policy around AI, and provide their employees with clear guidelines on how to use these new tools.
Tim Lockie: Policy is the most important thing because policy is what gives people permission to use it. So, the first thing is, without policy providing permission for staff to use it, they can't move forward. In terms of how to shape your policy, don't focus on transparency of AI use, where people say, if you use AI for anything, you have to tell everybody. That's not true of Grammarly, right? If you use Grammarly, which is an AI tool, you don't have to tell everybody, hey, I made this document with Grammarly. Instead, focus on accountability: if you use a spreadsheet, a word processor, or AI, you're accountable for what you produce, and you need to own that. So focus on those areas of policy. The second thing is: use what you're paying for. And this is the big cheat code.
Every organization I've talked to is either in Microsoft or Google. And when people ask, what should our organization's base model be? It is either Copilot or Gemini, depending on which of those you have. And the reason why is you're already paying for them. You can upskill everybody on your team, right now, on the 20 things everybody should know how to do with AI. And you can do that with no extra cost and no extra cybersecurity or data privacy risk, because you've already accepted the risk of having your data in their cloud, and their enterprise-level AI security and the way they train their models already meet the rigorous guidelines of what nonprofits need. So that's my big cheat code: use what you're paying for, and let people know that they can use it that way.
Mary Barroll: In order to foster an AI-forward culture in their organization, Dan Kershaw says it’s up to leaders to create a safe space where their staff can experiment and learn at their own pace.
Dan Kershaw: Whoever is a leader, they set the tempo and the tone. You have to create the conditions to encourage that learning. I’m a father of four, and we homeschooled our kids. I learned very early on that even when two of your kids are identical twins, they do not have the same learning style, and just because you want them to learn the same way does not mean they will. The same applies to staff. So, it's not enough for me to say, just watch this video, just do this exercise, just use this tool. Leaders have to create the conditions of a safe space to experiment and share and learn, in all the different ways, or your AI roadmap will never take root beyond individuals. At an organizational level, AI is a team sport.
And then there are people who say, I don't use PowerPoint, don't make me touch it. The same is true with all of the different AI tools and opportunities, and that's okay. But for the leaders who say, no, no AI in our organization: unfortunately, if you look at your stats, and I'm sure you have them, 50 to 60% of people are already using AI. So, they're going to bring their AI to work, and they're going to do it under cover. They call it shadow AI. And that is dangerous, because they can't talk about it and can't engage with their coworkers to understand what the best practices are, so you run the risk that they're going to do something with your organization's data that you wouldn't consider safe, ethical, or legal.
Mary Barroll: Tim Lockie reiterates that organizational leaders would do well to give their staff permission to work with AI tools, because the risk is greater if they don’t.
Tim Lockie: The biggest risk is, if you don't give people permission, they won't pay for it. And when they aren't paying for it, they're going to find efficient ways to fundraise. They're going to find efficient ways to use volunteer management options. They will be efficient with the jobs that they have, and there's no way to be efficient with AI without putting in that data.
If you are not paying for models, if you're not saying, use these ones for work, they're going to be using free models, and they're probably not going to be turning off the data sharing, and maybe not even addressing the data privacy or cybersecurity issues. So, you're opening yourself up in a major way by not providing people a path to use efficient tools that they already know how to use, because they're using them in their personal lives all the time.
Mary Barroll: Tim Lockie says once permission to use AI has been given, the next step is training.
Tim Lockie: The second step is to upskill everybody. Host a training: have the people who have been using it, whether they're supposed to or not, show everybody what they're doing with it. Just take an hour, provide lunch, and have people see what can be done with it.
If you want to go one step further and be more structured about it, hire someone like me to come in and do a training that says, here are the 20 things everybody should know how to do in Gemini, and here's the framework to understand how AI is affecting us and what we should do. Now, your turn: you explain the policy for your team. That's what I do for a lot of organizations, just to set the bar where everybody knows it. And then the third thing, and this is really important: don't get fancy with AI until you've done those two steps. If you start by trying to get really fancy with a model and build stuff out, you're putting the cart before the horse. That is where I see, time and time again, organizations have these big backfires. If you work in that order, unlock leaders, upskill staff, and then upgrade your tech last, things will work really well. If you go in any other order, things will fall apart pretty quickly.
Mary Barroll: Dan Kershaw also advocates for AI training and suggests that this is also a way for organizations to give their employees an education that they can use throughout their careers.
Dan Kershaw: In the good old days, banks would say, we want to give you financial literacy training. My staff have told me: we don't want one-size-fits-all financial literacy training. We would like an interactive one that is customized to us and speaks our languages. But what we would also like, since these AI skills are some of the most in-demand skills over the next five years, is for us, the staff at Furniture Bank and in our social employment program, to become AI generalists, so that if and when we leave Furniture Bank, we actually have AI on our resumes, which can help us get high-paying, secure, valuable jobs and careers in the future. And from there, you'll find people. You'll find people like me who will talk about this all day long.
Mary Barroll: Elena Yunusov insists that nonprofit boards and leadership teams need to think of integrating AI as an organizational strategy rather than as a tech issue.
Elena Yunusov: I go back and forth on how I frame conversations about this, because AI to me is not a technology, and I think that's really important to understand. It's a socio-technical system, and the social part of it is as important as the technology part; they go hand in hand. It's why our nonprofit, Human Feedback Foundation, is a socio-technical nonprofit: there is work we do on the technology side and work we do on the social impact side, and the two are correlated. So, I think it's important for leaders to understand that there are very important stewardship and governance roles for them to play. You can't hand it to the IT manager the way you would the build of a mobile app and say, hey, you go figure it out, you're in IT. Once boards understand that there is a governance responsibility, and that doing it wrong presents serious risks to their organization, the conversation very quickly becomes about the how and the what do we need to know, because they understand that it lands with them, as it does with the executive leaders in the organization.
You can't treat it as the implementation of a third-party CRM. It will have that component, but it has a very significant strategy and change management dimension, and you're designing it as you fly. This isn't just technology we're talking about; we're talking about organizational change and future-proofing organizations going forward. That is why we designed the program to include a course through TMU on governance and responsible AI for nonprofit leaders: we need to move things at that leverage point. It's not enough to educate frontline staff and have them be literate in using the tools. In small to medium organizations especially, boards and executive leaders play a really significant role, and we've also had to secure executive-level support for the organizational AI adoption track. That is something we've looked at and evaluated through the application process: whether that type of support and buy-in is, in fact, in place, because it's still early days. We can't have an organization going through a massive AI adoption journey while the board is hands-off and not involved at all, with execution left to someone at the senior level of leadership; it just won't work as well. So, it is both top down and bottom up, and both need support and literacy.
Mary Barroll: Organization leaders have an opportunity to use the integration of AI tools as a means of creating an engaged and integrated working culture, says Tim Lockie.
Tim Lockie: When leaders are curious, they become a symbol of what's expected and accepted. And I think this is something that leaders miss, and they miss it with information systems all the time. People are watching the way that you think about information systems, the way you talk about them, and that creates the culture. And the culture is the most important integration source when it comes to these kinds of tools. Because when people feel supported and encouraged, they can engage. And engagement is what really creates that kind of efficiency.
Think of it this way: if you had one person or three people at the top using AI, that would be good. Maybe you could get a lot happening. But if you took everybody else in the organization and just gave them basic literacy, they're going to integrate AI into their workflows, and they're going to know how to integrate it better than anybody else. And if they do that inside the tools you're providing, when they either move to a new position or transition on, they can effectively say, here are all the prompts I use, here's how I do this quickly, here's the cheat sheet. And now someone else is working just as efficiently.
Mary Barroll: For any nonprofit leaders listening right now and feeling a bit overwhelmed, I asked a few of our guests for advice on what first steps to take. Dan Kershaw has this to say to those unsure about where to start with AI.
Dan Kershaw: We all watched Harry Potter, I think. And there's a moment in Harry Potter where Hagrid drags Harry to the brick wall. Harry has been told he's a wizard, but he has no idea what that actually means. And Hagrid taps on the wall, the bricks slide past, and suddenly Harry is in Diagon Alley. Now, why am I talking about Harry Potter? Because Harry is that employee. Harry is standing on the street, seeing magic in action. Hagrid is like, once you go through school and you see it all and experience it all, that's magic. But you, Harry, can't see past it all, because it's all new to you.
So, I always say, just start. Pick a model. If you’re on Google Workspace, you have Gemini; it is sitting right there. Work with it nonstop for a week, as if it's sitting right next to you. Try it on everything. Learn it. See what it does well. See what it doesn't. Practice and play with it. Or go get a paid version of ChatGPT. ChatGPT and Google's models can see, hear, and talk. Just start using it. There's translation. There are so many things AI can do. It's so much more than chat.
Mary Barroll: Here is Anne-Marie Newton’s advice for other nonprofit executives or leaders who want to embed AI into their strategy and into their operations, but don't know where to begin.
Anne-Marie Newton: Finding the small percentage of folks who are really into it and want to test it out in pretty safe areas of your business, where the stakes aren't super high and you can see what it does, has certainly been the way it has evolved at our office. And that's great, because you get those early champions who get other people excited about what the tool does, and other people get on board. So, I think just recognize there's always going to be a phased approach to change management, because that is what this is. It really is a change management practice.
And so, bringing all of those guidelines into place and making sure that people understand how this can make things better for them, and why it's actually to their advantage to get on board with something like this, is certainly a practice I would recommend. Pre-pandemic, everyone thought remote work was going to take a long time to deploy, and in a weekend, pretty much every organization figured out how to pivot and adopt it. So, thinking through what was successful in that experience that all of us had at the same time, what in your culture did and didn't work, and how that can be applied in this instance is probably helpful as well.
Mary Barroll: Jessica Vestergaard’s advice, when it comes to integrating AI into work, is to take it slow.
Jessica Vestergaard: Start small, start on those low-stakes tasks. Get to know AI and how you want to use it. Do little things like writing emails and creating task lists, and familiarize yourself with AI before you jump in and use it for much more complicated work, like grant writing.
SFX: High-pitched digital beeps
Mary Barroll: As nonprofits tentatively get to know AI, they can always reach out to the Canadian Centre for Nonprofit Digital Resilience, or CCNDR, and ask for help, says their CEO Jason Shim.
Jason Shim: Our role is in helping to develop, share and support resources around AI for nonprofits who are looking to engage and learn more. Some of the work that we're doing is supporting the Future Proofing the Community Service Workforce project, which is focused on identifying and addressing the digital skills gap in Canadian nonprofits. We're also currently working on an AI Impact Hub, which is set to launch and will provide more specific resources for Canadian nonprofits to learn more and advance their knowledge of AI and how it can help their organizations.
Mary Barroll: As organizations take steps towards embracing AI tools into their work, the national pilot project, RAISE, is busy drafting a playbook that will address the numerous concerns of leaders in the sector.
Elena Yunusov: Our job is to deliver a playbook, a roadmap and a set of frameworks. Assuming there is no follow-on funding, the worst case would be: great, we've worked with five large organizations, they've gone from zero to one, and we've upskilled 500 people. And that's amazing. But what have we done to externalize the learnings? What have we done to capture them and share them with the sector more widely? That is our role. We are drafting the playbook right now and sending it for review to a multidisciplinary network of experts, from privacy lawyers to nonprofit leaders to AI experts, policy experts and data leaders. We'll then deliver a draft that we will iterate on and augment with case studies from the five large nonprofits going through the organizational adoption track, but also with what we learn from program delivery through TMU: hearing the types of questions that inevitably come up, finding ways to address them where we see a pattern forming, and codifying that in the playbook so that it's useful to organizations, no matter their size.
We have started working through a document outlining what phase two of this project could look like. And I invite organizations to reach out to me, either through LinkedIn or by email at elena@humanfeedback.io, because we are looking for thought partners and funders to support the development of that phase two.
Mary Barroll: The advent of AI presents a potentially transformative opportunity for Canadian nonprofits to amplify their missions, serve more people and achieve social good at an unprecedented scale, but only if its implementation is approached thoughtfully and strategically, with an unwavering commitment to ethical oversight and human leadership. With much to contemplate, here are some final thoughts from our guests.
Jessica Vestergaard: I can't stress this enough. The more context that you provide, the better. The more context, the more information that you provide ChatGPT, the better result that you are going to get. It's only as good as the input that you put into it. And it's a collaborative process, not AI writes the grant for me. It's we write the grant together.
Tim Lockie: AI is a nervous system technology. We don't think of it that way, but it really, really is. So, the first thing I would say is, if you're feeling overwhelmed, that's normal. That's a normal response. Just stop and take a deep breath. It's going to be fine. You don't have to get this right away. You can relax into this. If it sounds confusing, everything that you've learned in life sounded confusing at some point. So that's OK. That's part of the process. It's normal.
Jason Shim: I think there will be significant advances for AI that will continue into the next five years. And the role that CCNDR can play is really helping the nonprofit sector and folks working in nonprofits to feel confident in using AI among other technologies and also to help shape the direction that it takes. So, the sky's the limit and it'll really depend on how boldly we can imagine together.
Dan Kershaw: Get involved in the training. All of these models need to be trained, and as a sector, we have all of this insight and intelligence around each one of our missions. Getting that information into the training will, in itself, make the models less biased, back to the ethics piece. Imagine if these models had all of the training around what our sector cares about. The nature of those models, how they would respond, would naturally be different, and for the better.
Anne-Marie Newton: I really want to be sure that we stay focused on the core competencies our staff need and don't rely too much on AI or technology, so that we're prepping future generations to be able to have human interactions. And that, for me, is always going to be a push and pull. We would never abandon a tool that's going to make us more efficient and save us money, but we have to really think through how we ensure there's enough of a check and balance when people are being trained, so they're getting the mix they need to be successful in a relationship-based business.
Elena Yunusov: We need to have these conversations and, as a sector, have a coordinated strategy to then scale and build that capacity, in a way that makes technical sense and future-proofs organizations for what's to come. What are we missing? What pieces already exist? Where do we see gaps, and how would we address them systemically, so that we're not each going through the same journey in isolation and we're not leaving organizations behind without a roadmap? It's an inquiry that we are actively pursuing, and we would love to hear from people who want to support us, either as a funder, a thinking partner or a delivery partner, so we can shape that strategy and vision together.
Music CharityVillage theme
Mary Barroll: Thank you to all our guests for their keen insight and wise advice. Be sure to visit our website and our show notes for more information on the resources, reports and programs mentioned in this episode. If you’d like to hear more of what our guests have to say, check out our full video interviews on our website. CharityVillage is proud to be the Canadian source for nonprofit news, employment services, crowdfunding, e-learning, HR resources and tools, and so much more. Please take a moment to check out our website at charityvillage.com.
In the next episode of CharityVillage Connects, we’ll examine how growing DEI backlash and rollbacks in the United States, including among some multinational companies, are impacting Canada’s nonprofit and charitable sector. We also take stock of where Canadian organizations stand today in their DEI efforts: what’s progressing, what’s stalling, and what’s at risk. With candid insights and strategies for navigating our current social and political environment, this episode offers an important check-in for anyone committed to advancing equity in the sector. I’m Mary Barroll. Thanks for listening.