How the smart use of AI can help the philanthropic sector

 

Jonathan Waddingham


The revolutionary rise of generative AI tools this year has prompted a wave of innovation across many sectors, with scores of AI-powered start-ups launching every day. But what is happening with generative AI and philanthropy? In truth, it seems like very little. This may be down to the fact that so many AI tools are already freely available and can be turned to many different tasks.

Take one of the most obvious use cases for generative AI in philanthropy: grant applications. The current process is often manual, time-consuming and relies on answering a different set of questions using variations of the same information. This is a task perfectly suited to an AI tool. You can ‘feed’ a tool your internal documentation and then ask it to answer questions based on that information, against a variety of criteria (such as specific word counts). Many charities we work with are already applying for grants with support from various AI tools.
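To make this concrete, here is a minimal sketch of that workflow, assuming the OpenAI Python client (any comparable API would work the same way). The file names, model name and word limit are placeholders, and the output should always be reviewed by a human before it goes anywhere near a funder.

```python
# Minimal sketch: drafting a grant-application answer from your own documents.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the
# environment; file names, model name and word limit are illustrative placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Your organisation's own material is the essential input -- the model can only
# rephrase what you give it; it does not know your story.
internal_docs = "\n\n".join(
    Path(name).read_text() for name in ["strategy.md", "impact_report.md"]
)

question = "Describe the impact your organisation has had in the last 12 months."
word_limit = 250  # the funder's stated limit for this answer

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You draft grant-application answers using ONLY the organisation's "
                "documents provided. Do not invent facts or figures."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Documents:\n{internal_docs}\n\n"
                f"Question: {question}\n"
                f"Answer in at most {word_limit} words."
            ),
        },
    ],
)

print(response.choices[0].message.content)  # always review and edit before submitting
```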

On the flip side, one organisation we know has also seen a dramatic increase in grant applications, thanks to AI tools making the process easier, and is now rejecting generic applications that were obviously created with AI. And herein lies the rub: generative AI tools make it very easy to create plausibly generic content. Their use needs to be considered, thoughtful and human-centred for them to be powerful in philanthropy.

How do you use AI and stay true to your story?

On one level, any AI tool is only as good as the question asked of it. Ask one to ‘write my annual report’ and it may ask you to share some information about your organisation, or it might simply make up text that sounds like an annual report, using what it has learnt from its training data set.

This ‘plausibly generic’ content is dangerous on a few levels. Most importantly, the content is fabricated. And without knowing any context about an organisation, or having any references to source material, that fact is not necessarily obvious. See the unfortunate story of the US lawyer who cited fictitious ‘legal precedents’ generated by ChatGPT.

Apart from that rather large red flag about factual accuracy, using an AI tool as a shortcut to creating content does not do philanthropy justice. For time-pressed nonprofits it might seem like a good idea, but there is a crucial first step to remaining authentic: working on your strategy and building a story to support that strategy, before using that content as input for AI.

If you put bad data in, you’ll get bad data out. And by that I mean that if you have not refined your organisation’s story, or don’t ask the right question in the right way, an AI will not do your story justice. Powerful storytelling requires an understanding of the audience you are trying to reach, the medium you plan to reach them on, and a clear articulation of your reason for being. These are not things an AI tool knows.

Whether you are writing your story for a grant application or for a post on social media, AI tools are most useful for providing feedback on your content or generating a first draft. In either case, think of the human involved as the pilot of an aircraft, with the AI tool as the first officer who supports the pilot.
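As an illustration of that ‘first officer’ role, here is a small sketch that asks a model to critique a draft you have already written rather than to write it for you, again assuming the OpenAI Python client; the file name, audience and prompt wording are illustrative only.

```python
# Minimal sketch: the AI as first officer, giving feedback on a human-written draft.
# Assumes the OpenAI Python client; the draft file, audience and model name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

draft = Path("social_post_draft.txt").read_text()  # your own draft, written first

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Act as an editor. Give three specific suggestions to improve this "
            "social media post for an audience of small, local donors. Do not "
            f"rewrite it.\n\nDraft:\n{draft}"
        ),
    }],
)

print(response.choices[0].message.content)  # the human pilot decides what to keep
```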

To explore AI’s potential, start with the problems

At Lightful, we started a cross-functional team dedicated to building our understanding of AI and how to build AI-powered features, which we imaginatively called our ‘AI Squad’. It is easily replicable: you form a group of curious people, give them some time and space to explore ideas, point them in the direction of known challenges, and let them go.

Crucially, we did not start with the AI technology and think about how to apply it to philanthropy; instead, we listed the challenges the nonprofits we work with were facing and asked if, and how, AI tools could help overcome them. In applying AI in philanthropy, you must start with the problem.

This is not hard to do. Think of any task that involves creating content, map out the process, identify the pinch points, and there’s a decent chance that an AI tool could help somewhere. Individually, this may save a few minutes per task, but the efficiencies will soon grow and multiply.

AI and trust

Our approach to AI is grounded in building trust, building equity, and building responsibly. That meant we were also looking for any signs of bias throughout the prompt-design process, and ensured we collected feedback internally and externally from a diverse group of people. For example, in one test around creating donor personas, the AI tool mostly created personas called “Sarah” or “John”. This was a clear bias, which we only uncovered by testing the same prompts with many different inputs related to different causes.
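For those who want to run this kind of check themselves, the sketch below shows the general idea: send the same persona prompt across several cause areas and tally the names that come back. It assumes the OpenAI Python client; the cause list, model name and prompt wording are placeholders, not our exact test.

```python
# Minimal sketch of the bias check described above: run the same donor-persona
# prompt across several cause areas and tally the names the model returns.
# Assumes the OpenAI Python client; causes, model name and prompt are illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()

causes = ["youth homelessness", "refugee support", "disability arts", "food banks"]
name_counts = Counter()

for cause in causes:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Create one donor persona for a charity working on {cause}. "
                "Reply with the persona's first name only."
            ),
        }],
    )
    name = response.choices[0].message.content.strip()
    name_counts[name] += 1

# If the same few names (e.g. 'Sarah', 'John') dominate across very different
# causes, that is the kind of bias worth flagging and correcting by hand.
print(name_counts.most_common())
```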

This relates to legitimate concerns about what content was used to train the AI. In most cases, we don’t know what was in a training set: the vast body of content a model had access to while it was being trained. This is particularly true when working in cause areas for under-represented groups, who are unlikely to be prominent in any AI training set. But until you start using the tools, applying them to more niche or complex tasks, and reviewing the outputs with a critical mind, you cannot uncover these biases or give feedback so they can be corrected. That could ultimately become a self-fulfilling prophecy: if only the people the tools are already biased towards use them, the tools will only become more biased over time.

Like any technology tool, AI is neither intrinsically good nor intrinsically bad; it is the considered application of the tool that leads to productive or destructive outcomes.

Using AI in philanthropy

Ultimately, an understanding of AI tools and their use in philanthropy only comes from using them and seeing what they can do. And, more importantly, seeing what they can’t do. They do have limits, and it is possible to uncover some of their biases and make informed decisions about whether to use an AI tool at all.

It is also true that there are incredible benefits to be had from using AI tools to tell better stories, reach more people and save time. Our ‘AI Squad’ approach has enabled us to develop new digital tools and learn the practical application of AI in a short period of time. Any nonprofit or foundation can follow the same approach.

I believe that there is an enormous opportunity to increase efficiency and effectiveness through the smart use of generative AI tools, particularly when they are applied to existing problems. And from a storytelling perspective, care should be taken to avoid plausibly generic content and to remain true to the human-centred aspect of philanthropy that is so powerful.

Jonathan Waddingham is the Managing Director of Learning at Lightful, a B Corp.

