In episode 91 of Mission: Impact, Carol Hamilton and George Weiner discuss the intersection of artificial intelligence (AI) and the nonprofit sector. They delve into the significance of AI in the sector, emphasizing the need for strategic adoption and policy development. The conversation then shifts to the integration of AI into everyday tools. They cover practical applications of AI, how to craft AI policies, and the potential for AI to enhance nonprofit operations, particularly in grant writing and reporting.
[00:06:00] AI Adoption and Policy in Nonprofits
[00:08:00] Hype Cycle of AI
[00:11:00] AI in Everyday Life
[00:13:00] AI for Nonprofit Content Creation
[00:15:00] Crafting Nonprofit AI Policies and Guidelines
[00:16:00] Integrating AI in Nonprofits
[00:20:00] Avoiding Overextension with AI
[00:24:00] Practical AI Tools for Nonprofits
[00:27:00] Time Saving vs. Time Reallocation with AI
[00:29:00] AI in Grant Writing
George Weiner is the Chief Whaler at WholeWhale.com. He co-founded CTOs For Good and PowerPoetry.org. He is a dad and a nonprofit geek.
Important Links and Resources:
Cause Writer AI https://causewriter.ai/
The Smart Nonprofit by Beth Kanter and Allison Fine
Carol Hamilton: So welcome, George. Welcome to Mission: Impact.
George Weiner: Hey, great to be here. Thanks. Thanks for having me. We're doing the podcast swap. We're, we're living the dream.
Carol: I always start out the podcast by asking each guest what drew them to the work that they do. What would you describe as your why, or your motivation behind the work that you do?
George: My mom actually worked in public service her whole career, and ultimately, when I did the math looking at careers, I just didn't get very inspired by the fact that a lot of jobs seem to just make rich people more money. I decided to stay in the nonprofit sector. It's the only thing that makes sense to spend my time on.
Carol: Well, we're in agreement there, for sure. At your company, Whole Whale, you help nonprofits and social impact organizations increase their impact. You and I are both part of the nonprofit community, and I've heard you talk a lot about AI and nonprofits. AI has definitely been on everyone's list for a while in terms of trends to look out for and things to pay attention to in the future.
But I feel like this year it moved from the future to now, at least for those of us who aren't paying that much attention; I'm sure it's been longer than that. What would you say are some of the main things that people in the sector really need to understand about AI?
George: I would say one is that there are perils at both ends of the policy spectrum: "no one can use this thing, shut it down, head in the sand," or "everyone use it randomly." And there's a strong corollary to the early days of social media. Those of us who have been in the business a little while remember being told, you're not allowed to use social media at work.
And you're like, okay, but I've got this thing called a phone; I'm going to use it to do that. The truth is, when a product shows up that is so good, so useful, you're truly fighting gravity. And so there's a certain gravity to the adoption of this, where the distance between these AI tools
and the tool you are using right now (you have a tab open, you're multitasking) is reducing by the day. "AI as a service" means an AI does a thing inside the tool you're already using. And you can be a passive passenger on that trip, or, as an organization, you can start having the conversations, bringing in the stakeholders and saying, I understand that
the tool we're using just rolled this out, but let's understand that there is a Western European bias behind this particular LLM (large language model) that was just foisted upon our team, before you hit autocomplete on the entire release post page content.
Carol: So one of the things that you've talked about is that adoption cycle; some people may be familiar with parts of the innovation adoption cycle. But the one I was interested in, that you had talked about, was the hype cycle. 'Cause for folks who may want to put their head in the sand and not pay attention to this, and not think they have to deal with it until next year, or until this particular fundraising campaign is done: what is that hype cycle with technology, where people are just like, oh God, here we go again? And where would you say AI is in terms of that?
George: Yeah. Well, credit to Gartner research for this, gartner.com, but ultimately, just to paint the picture: on our y-axis, the up and down, we've got expectations, and on the x-axis on the bottom, we've got time. And frankly, over time we get really excited about stuff. I can't wait,
it's gonna solve all the problems. And that's where we get into the peak of inflated expectations and excitement. That's 2023, in a nutshell.
George: What I'm starting to see for AI, without a doubt, as we get toward the end of 2023: anything that burns brightly can't burn that long.
And so this excitement is now tilting toward, in this chart, the trough of disillusionment, followed by the slope of enlightenment, toward the plateau of productivity. Yes, very colorful language, but if you visualize it, we go high to low to then a plateau. What I'm trying to do is help organizations get into that plateau sooner, because the upside for you and your employees with regard to productivity, efficiency, and being able to just do more with less
is tremendous if you use these tools the right way. And we're starting to tilt down toward this trough, which means we're gonna get a lot more of these negative press articles coming out saying, somehow at the same time, this thing is so silly it can't do any of the basic tasks I ask. And then you're gonna see another article that says it's gonna take every job that was ever invented, and you are all at risk.
You know, because papers have to make money, and they do that by tacking to extremes. Yeah.
Carol: Well, I like that analogy to the beginning of social media, or any technology that you can think of. I've gone through many different technology adoptions over the course of my career, and at the beginning it's always, yes, it's gonna solve all the problems. And then it's evil.
George: Yeah. No more problems.
Carol: We just use it. I'm wondering, to kind of ground this: where are people already seeing it? Where are they using it without realizing they're using it?
George: I love this question, because it's gonna make you realize that we've already been using this, or, I should say, it's being used on us. The power of these large language models has been a tightly held secret of large corporations that have built these things on our data, for their purposes, and have helped their cash flows and their teams do things at scale. And now the wall has fallen,
and we as a collective have access to these tools. So what does that mean? It means that roughly 10% of Google searches have been assisted by these types of large language models that have given you an answer and let Google profit from it. It means that every time you look at a Facebook feed, or are part of one of these social networks generating the next thing you should watch, there is an AI involved in that.
And even our own autocomplete of any sort: oh, thanks, Gmail, I really appreciate you filling in the first name, or suggesting how I should finish my sentence. It's happened so quietly that we're like, oh yeah. But make no mistake, it's already been around. We are just more aware of it, and have more access to it, than ever before.
Carol: So you mentioned a large language model. Can you say a little bit more about what that is?
George: This is such a jargon-heavy world, and I feel like the jargon puts a gap between people and the topic: oh, that sounds complicated. So I'm gonna try to do this with no jargon whatsoever. They took a bunch of text, like all of the text from the internet, a trillion words, and then they shoved that through an amazing probability machine called a neural net.
It asks: what is probably related to what? And it's incredible. It's like a database that associates everything, and then it sort of shrink-wraps that into this little engine that can predict the heck out of whatever you give it, whatever should come next, in a way that mimics human comprehension. In a way that can now substitute for a lot of our writing.
And that, you know, is pretty much what we're talking about.
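George's "probability machine" can be illustrated in miniature with a toy next-word counter. This is nothing like a real neural net in scale or mechanism (the corpus below is invented for the example), but it shows the same core idea: count what tends to follow what, then predict the most probable next word.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real models train on roughly a trillion words.
corpus = "the whale swims and the whale sings and the dolphin swims".split()

# Count which word follows which: a crude stand-in for the probabilities
# a neural net learns at vast scale.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed next word, if any.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "whale": seen twice after "the", vs. "dolphin" once
```

Scale this up by a trillion words and many layers of learned association, and you get something that can "predict the heck out of" whatever text you give it.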
Carol: So, thinking back, I remember there was a big thing around Google wanting to digitize, like, all books.
George: Oh yeah.
Carol: Is that part of this?
George: Oh, it lives in some corpus for sure.
Carol: Yeah. Yeah. And I'm just thinking, I was not necessarily thinking that I'd been using it, but then I thought, okay, well, when I first started doing my podcast, I used Otter.ai. I should have known.
George: It's a good one.
Carol: It had AI at the end of it. Duh.
George: Really subtle.
Carol: I used it to transcribe the interviews. And now you'll see on a Zoom screen where somebody's logged in, their AI is logged in to get a transcript or notes from the meeting. Zoom now has its own version that will give you notes from the meeting. And I'm just trying to think of all the ways, the autocorrect, the autocomplete, that we've started using something without even realizing we were using it. We only become conscious of it when we say, okay, I'm gonna download ChatGPT and I'm gonna put something into it. Now I'm using AI. But actually I've been using it for the last several years.
I didn't even know it.
George: Yeah, and I just wanna come back to this: there's so much jargon in this topic, and you're like, oh, I don't understand it, I can't use it. But the number of humans who can explain how electricity works is shockingly small. Shockingly small.
Carol: And we all use electricity
George: I'm a big fan, I've watched YouTube videos. I'm not confident in how electricity works.
Not fully. You'd say something about electrons, something about magnetism or other. So I think there's a little bit of trying to understand it. What's important, though, is that what you just said is an exact reflection of the distance between AI and the tools we use being reduced.
And that's where I think it's important to be intentional in understanding its weaknesses, and also how we get more out of it. Coming back to electricity: we all have access to it, but some of us do a little more with it than others. It's becoming like running water, and it really is AI as a service, just as we moved to the cloud as a service.
All of our stuff is now hosted by Amazon or Google, and now AI as a service is just gonna be layered into everything we're doing. So again, if you're a leader right now, there's a certain inevitability to it if you just sit there, because it's coming inside all of our tools.
Carol: So when you're working with leaders, and you say, have the conversation about what your policy might be, what are some of the questions that you're having people think about in terms of how they might use these tools?
George: I actually built a free AI policy coworker. It's an AI chatbot that walks you through the various steps of creating an AI policy; it asks you questions and then generates your full AI policy based on your answers. So, ironically, using AI to help you, but if it's an AI policy, who better to advise you? The elements in there that I think are important to have, that are non-starters for me, are the following.
If I'm an executive, I'm making sure the entire company knows that everything generated is a first, not final, draft. A "first not final draft" policy. If your policy is nothing right now, steal these words and take them as your own, fully: everything you generate with an LLM must be a first, not final, draft.
The other piece: any tool we design, or any process we have involving or touching any part or percentage of an LLM, must have a human in the loop. First not final; human in the loop. Then go play with the tools. Those are some very important guardrails. There are more guardrails to have, but if I give you too many, you'll lose the thread.
So please keep those. They will keep you out of all manner of harm.
Carol: Yeah, I mean, I'm again thinking about how I've been using it. A lot of it has to do with generating a summary of a podcast interview or, you know...
George: Love it. Yeah.
Carol: ...social media posts, LinkedIn posts. So I've actually gone and done those kinds of things, but definitely always as a first draft. Then I'm reading through it, deleting, adding, getting a little bit better at telling it what to do. But yeah, I definitely appreciate those two guardrails. From what I'm hearing, it's gonna be in lots and lots of things, but from a nonprofit organization point of view, where are the places where you think it's gonna show up or be most easily applicable first?
George: I like the way you're asking that question, 'cause it's not, let's spread peanut butter across everything. It's, let's look at the vertical, and I think there's a lot of vertical opportunities. I've been doing a lot of training, AI training specifically, so I'm very AI-primed right now.
And I actually start with this supermarket framing. I like walking through every aisle of the supermarket: look at this box, look at this thing. And you know what happens? Someone on the team goes, oh my gosh, that's the exact use case that I have in mind. So rather than me saying, use this or use this, I'd encourage you to walk down that supermarket a little bit, talk with your team, and acknowledge that any technological tool or product
actually involves the following components: the people, the process, and the product. And it's not equal parts. It is people first, which, you being in strategic planning, I don't need to say. Then the process, and then the smallest percentage is ultimately the tool, the product. And that is where things sort of get flipped on their head.
That's the ultimate difference between techno-first thinking and nonprofit, human-first thinking. And so...
George: That's where I live.
Carol: It reminds me of when, okay, this will really age me, but I've already done it on this podcast, so it's okay: organizations first needed websites. And they thought websites should be in the IT department. And it's like, no, websites are a tool for communicating with other people.
They're not about technology. Maybe when they first existed, in the nineties, people needed higher-level technology skills to actually build them. But it took a while for people to realize it's a tool we want to use for a particular thing, in this case letting people know what we do and who we are. That is really a communication and marketing and fundraising function, not a technology function.
George: Yeah, I mean, the realization that it was a medium was a full phase shift. Prior to founding Whole Whale, I was actually the CTO of DoSomething.org, and I also happened to run the IT department at a certain point.
Fortunately, we separated those functions over time. But here's the hope I have: in the early days of websites, I think a lot of nonprofits viewed them as a brochure online with a donate button. And what I hope is that you don't look at LLMs as, oh, here's a way to create a tweet, and move on.
There's a lot that you can unpack inside of there. Not just, oh cool, I made some tweets. You already brought up doing summarizations. Think of the number of organizations currently going through massive PDFs from grantees or foundations, or you name it, by hand, parsing and combining them.
That is a great use, like you just mentioned: summarization. There's a whole ecosystem, an art, of crafting those prompts and saving you time, which lets you do that higher-order level of work: hey, now let me get into the nuance of matching our strengths programmatically with the requirements of this grant.
Carol: Yeah. Yeah. What are some other use cases that you're seeing? I know you said go through the supermarket and pick the different
George: Yeah, supermarket.
Carol: But I'm, I'm, I mean, I'm a newbie, so I don't even know what's on the shelf. I've never been in this supermarket
George: So look, you're at the "what is food?" stage. So let's just break it up. Starting with what you already mentioned: text-to-text generation. Summarization is excellent because you avoid what's known as hallucination. Basically, if you're asking for factual recall from an LLM, like "give me the exact history of nonprofits since the beginning,"
it's gonna lie one out of five times, and I just don't know which time. That's dangerous. Summarization, though? Excellent. You've given it the entire corpus of information and then asked it to distill it, and then you can work with it. For those summarization tools, OpenAI is excellent, and Anthropic.com is excellent for large amounts of text.
There are also services out there like "chat with PDF" tools. Be careful with any tool that you don't pay for, because if you don't pay for it, you are the product: it is being trained on your data, as opposed to those other tools, which have policies stating that they won't use it. Trust or don't trust those policies.
Yeah, that's fine. The next category I would say to play with is...
Carol: Wait, about that, 'cause...
George: yeah, yeah, yeah.
Carol: Am I just handing them all my information by using this tool and letting it summarize my things?
George: No. In short, when you pay for that API access, they have a public statement that they are not training on your data.
Carol: But with the free one, I am.
George: You potentially are, yes.
Carol: I thought so, but I was like, huh, I should probably ask George.
George: That's a good thing to ask. Yeah, knowing their policies matters, and I think it matters to them quite a bit as they work with B2B cases and try to monetize their tools; that kind of thing is pretty darn important to them. And even OpenAI, not to date this too much, came out in early November and said they have Copyright Shield now, where they will literally
help you defend lawsuits over things that were originally created by their tools. So that's a pretty big nod toward that level of safety. The next bucket, though, is text-to-image generation. That's where you get models like OpenAI's DALL-E, Stable Diffusion, and Midjourney.
And don't get caught up in the tool. Just understand that you write text and it creates an image. You can then use that image for ads, for your email, for your website, and it has the Creative Commons licensing on that. There are asterisks to get into, but in your mind that might seem like a quaint thing to do. Like, what a cool thing to do.
Like, what a cool thing to do. There are currently over 20 million images created by AI per day across those tools that I just rattled off
George: In short order, there are gonna be more images created. By AI rather than humans in the game. And I think that falls into the category of bigger is different, and I don't know what that means, but it does mean that you should be aware and potentially play with text to image.
And then rising also is text to video. And then of course, image to image, image to video. All of those layers exist. So again, when we're walking down the supermarket, one of those things we're like, oh my gosh, I didn't know that was possible. Someone's light bulb just went off. And by the way, there's also a whole ecosystem of pulling this into tools like spreadsheets and doing processing through spreadsheets.
So if you do a lot of spreadsheet work, look at GPT for Sheets. And if you're into the advanced category of automation, play with Zapier and AI, and the fact that you can have what are called agents, meaning that an AI can interact with the APIs of other tools you use. Put another way, I have a tool where I can email an AI
and it will automatically create the Asana task, with full details, based on a long, laborious email thread.
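As a rough sketch of the kind of email-to-Asana automation George describes (this is not his tool; the prompt wording and field mapping here are assumptions, and the Asana `POST /tasks` call requires a real access token and project ID):

```python
import json
import urllib.request

# Assumed prompt for the LLM step: ask for task fields as JSON only.
EXTRACTION_PROMPT = (
    "Extract one actionable task from this email thread. "
    'Reply with JSON only: {"name": ..., "notes": ..., "due_on": "YYYY-MM-DD" or null}'
)

def build_asana_payload(extracted: dict, project_gid: str) -> dict:
    """Map the LLM's extracted JSON into the body Asana's POST /tasks expects."""
    payload = {"data": {
        "name": extracted["name"],
        "notes": extracted.get("notes", ""),
        "projects": [project_gid],
    }}
    if extracted.get("due_on"):
        payload["data"]["due_on"] = extracted["due_on"]
    return payload

def create_task(extracted: dict, project_gid: str, token: str):
    """Create the task via Asana's REST API (network call; needs a real token)."""
    req = urllib.request.Request(
        "https://app.asana.com/api/1.0/tasks",
        data=json.dumps(build_asana_payload(extracted, project_gid)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In a full pipeline, an inbound-email hook would send the thread plus `EXTRACTION_PROMPT` to whichever LLM you use, parse its JSON reply, and pass the result to `create_task`. Keeping a human in the loop to review the created task fits George's "human in the loop" guardrail.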
Carol: Oh wow. That's one I would like.
George: One, please.
Carol: When was that meeting?
George: Yeah, the thread of death.
George: Yeah, it's an email-based tool that you can set up. We set that up with CauseWriter.ai; we build a lot of these custom pieces. When you're like, "here's my problem," great. It's not one-size-fits-all, it's unique to you.
Carol: Yeah. You talked about the extremes: it's gonna solve all the problems, or it's doom and gloom, nobody's gonna have any work to do anymore. I feel like another one in there is "this is gonna save you time." And the piece that people assume, but that I don't see ever happening, is that when you save time, you're gonna work less or things are gonna be easier. I've yet to experience that, because people just fill in the time with all the projects they had on the back burner. So it is time-saving, but to me it's almost more of a time reallocation. What are you seeing in terms of that?
George: Yeah. You know, it's the age-old story: the cotton gin's gonna put us out of business, the weaving looms are gonna put us out of business. Then it ends up creating more work. I use a lot of these tools, and I find that even though I can edit, modify, chop, and push content faster than ever,
instead of getting the time back, I just 10x the amount of stuff I create.
George: And you're like, oh, wait a minute. Basically, I just made the treadmill go faster. That's what happened with me; I just filled that time with other things. The danger, though, and you point out the nuance, is that it can ironically take up more time, because you're like, oh,
all of these things are possible. And suddenly you end up missing the rocks. You know the analogy: fill a glass with rocks, pebbles, and sand. If you start with the sand, i.e., playing with weird new AI tools and going down rabbit holes, there are a lot more rabbit holes to waste your time in.
So I would say that's the risk of spreading it like peanut butter across a bunch of different things: I can use it over here, I can use it over here. I would say focus on that one use case; pull on one of those threads that I threw out there. And by the way, I didn't even get into the data analysis you can do, or frankly the coding support.
But the folks that are in those fields tend to already be playing with it.
Carol: Yeah. I feel like one that I've heard a lot about has been in terms of grant writing, grant reporting, fundraising.
Carol: Can you say a little bit more about that? Well, number one, am I right, is that what you're hearing too? And two, how would those folks actually use the tools for those kinds of things?
George: Yeah, I'd say there's no substitute for playing with these tools for that purpose, but their ability to understand large chunks of text is awesome. And when you're dealing with a grant, and also your proposal, it's possible to break that up. So we have literally built tools that will break down your initial project brief, whatever it is, and then format it into your outputs.
Some guardrails on doing that: yeah, you could just upload a bunch of content and say, "write this for me." What's gonna happen is it will do a bad job. If you try to get it to output too much text in one shot, it gets tripped up. You'll do far better by chunking your prompts, meaning breaking up the task: "give me the executive summary of my project proposal for this program,
here's some context on the program; just write that for me." Great, that's done. Next phase: "give me the timeline, priorities, risks, and opportunities based on this information." Great, thanks. Next chunk. So: chunking, and moving through it. I think even using OpenAI directly is spot-on, and with the new updates they have, you can give it even more of your data in your own ecosystem.
Think of it like a sandbox. Inside the sandbox the AI plays in, it only has access to your grant proposal, the summary of your annual report, your last ten project reports, your program information, and it can draw on that and pull it into what you're writing. That's doable right now through OpenAI Assistants.
George: Or you can pay for tools. One I know out there is Grantable; they have a tool built on LLMs with the modifications they need. And, like I just said, I built one the other day for a client on CauseWriter. So I would encourage you to just try it, try chunking it, and use OpenAI.
Ideally, pay for the account so you can use GPT-4, which is the most advanced model. See what happens.
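The chunked-prompt workflow George walks through might be sketched like this. The section list and prompt wording are assumptions to illustrate the idea, and `ask_llm` stands in for whichever LLM client you use (an OpenAI call, Claude, or anything else that maps a prompt to text):

```python
# Draft a grant proposal one section at a time rather than in one giant
# "write this for me" request, which tends to trip the model up.

SECTIONS = [
    "executive summary",
    "timeline and priorities",
    "risks and opportunities",
    "budget narrative",
]

def build_prompts(program_context: str) -> list[str]:
    """One focused prompt per grant section, each carrying the same context."""
    return [
        f"Here is context about our program:\n{program_context}\n\n"
        f"Write only the {section} of the grant proposal. "
        "This is a first, not final, draft for human review."
        for section in SECTIONS
    ]

def draft_proposal(program_context: str, ask_llm) -> dict[str, str]:
    """ask_llm is any callable mapping a prompt string to generated text."""
    prompts = build_prompts(program_context)
    return {section: ask_llm(p) for section, p in zip(SECTIONS, prompts)}
```

Each chunked prompt stays small enough for the model to handle well, and the "first, not final, draft" line bakes George's guardrail directly into every request before a human reviews the output.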
Carol: Any other misconceptions that you feel like folks need to be disabused of early on, as this is coming at us very fast?
George: If you feel like you're behind, you're not. If you feel like you'll never get to expert level, like, oh, there's this guy who said all this stuff and it felt way over my head: let's just be real for a second. ChatGPT came onto the market one year ago. So talk to me about someone becoming an expert in, let's say, prompt engineering or prompt architecture.
Coming back to electricity: the outlet on the wall that your device is plugged into right now, an electrician took two to four years to get certified to put that outlet in. There are no experts in this. There are students. So if you're curious, there's plenty of time to catch up.
The rules, yes, are changing, but I think that's a mindset thing, and a lot of folks who listen to your podcast have that growth mindset. So I think it's exciting, because there's a lot of upside if you're able to do it. But again, my concern is that nonprofits end up as that passive passenger again: you know, hey, the website is just a brochure with a donate button.
I would love to just skip over the trough of disillusionment and get right to the plateau of productivity for,
Carol: I don't know, we may...
George: for folks doing good work.
Carol: I think the rollercoaster is always there.
George: It's, I, you know.
Carol: I don't know what...
George: I can dream.
Carol: ...the trough of disillusionment will be about tomorrow, but I'm sure it's coming.
George: Yeah, you're right.
Carol: So as we close out each episode, I ask each guest: what permission slip would they give nonprofit leaders, or what would they invite them to consider, to avoid being a martyr to the cause as they work toward building a healthier and more equitable organizational culture? So, either a permission slip or an invitation.
George: Yeah, a lot to unpack, but I'd say the permission slip I'd give leaders is the space to learn, and the permission to be wrong publicly. I think there's an expectation, one way or another, that leaders are supposed to know which way to go no matter what. And I think one of the most powerful things you can do is say the following words:
I don't know.
George: Then also, I think a lot of leaders don't have the space to learn. That's a tough thing, and it has to be done with time; you need that space, especially right now in a high-change environment. Those are some pieces there.
Carol: Well, that's wisdom for all sorts of topics, but this one in particular. And we'll definitely be linking to the policy generator you've talked about so folks...
George: Yeah. Yeah. Yeah.
Carol: ...can get started there and have those conversations. I really appreciate all the insight you've brought to this, as we learn more and become learners. One example for me: you posted something on LinkedIn about an LLM, and I'm like, what's that? So there's a lot of learning to happen. I appreciate it.
George: Oh, thanks for telling the story.
Carol: All right.
Grace Social Sector Consulting, LLC, owns the copyright in and to all content in and transcripts of the Mission: Impact podcast, as well as the Mission: Impact blog with all rights reserved, including right of publicity.