ChatGPT and fundraising – what do you need to know? (part two)
In part one of this series, SOFII asked three UK-based fundraisers some key questions about ChatGPT (Chat Generative Pre-trained Transformer) and fundraising. Now we head across the pond for a chat with another fundraiser who has spent lots of time working with ChatGPT and other application programming interface (API) tools – Cherian Koshy.
- Written by
- Cherian Koshy
- March 16, 2023
Editor’s note: Cherian Koshy’s career in fundraising has seen him work as an in-house fundraiser at a variety of non-profits and as a consultant. He’s also a self-described ‘biased contributor’ to this piece, but in a good way! Cherian is the proud founder of a platform that democratises access to artificial intelligence for nonprofit organisations.
SOFII knows Cherian has great experience in digital fundraising. So, we directed a few questions (and concerns) to him regarding ChatGPT and fundraising.
Q1: Some of us are hearing about ChatGPT for the first time, but is it really that new?
Cherian Koshy (CK): Prior to the debut of ChatGPT, I had already been working with other API tools from its creator, OpenAI. In fact, there have been various models available from OpenAI, and the ChatGPT version we’re talking about today is an evolution of Davinci-003 (D003).
This latest D003 model is a significant improvement over the previous version, 002. And while we wait for the next incarnation, 004 (due sometime this year), ChatGPT has the virtue of being more easily accessible to the public. And it’s free!
It is potentially the least fascinating of the models, primarily because this one can’t currently be altered – it’s simply an ask/answer format. You ask it a question, and it gives you an answer.
Q2: What do fundraisers need to know before we attempt to use ChatGPT?
CK: In my opinion, the biggest learnings from ChatGPT are two-fold. First, it is important to recognise that these models are useful for a narrow set of tasks. Many people have criticised how foolish the model was when it couldn’t solve logic problems or riddles. Others identified that the writing was wordy, predictable, or emotionless.
In some sense, this is like complaining that our robot vacuum did not wash the dishes. Do we complain when our desired result is on the second page of Google or when Amazon can’t find the exact item we saw in the shop? Perhaps, but not justifiably so.
The truth is that ChatGPT provided a reasonably correct answer to the question that was posed. So, the first learning is that we must temper our expectations with context and potential uses.
The second learning is how important prompt engineering is to getting the proper output. ChatGPT has encoded huge volumes of data, and the way those data points (words in particular) relate to one another is based on how frequently they have appeared together before. For example, after reviewing books and websites, it rightly concludes that fundraising and gifts in wills are much more closely related than fundraising and chicken tikka masala.
However, as with any computer programme, it’s prone to error. More than a decade ago, IBM’s tool, Watson, incorrectly answered ‘Toronto’ to the final question on television quiz show Jeopardy. The question was: ‘Which city is home to airports named for a World War II hero and a famous World War II battle?’
Watson went on to beat Jeopardy super champions Ken Jennings and Brad Rutter. And while ChatGPT might fare better than Watson, which answered 71 per cent of its questions correctly, it’s clear that the semantic structure of the question can make a significant difference to the answer.
I suppose it’s a bit of a twist on the axiom ‘garbage in, garbage out’, because this tool requires us to be relatively specific about what we ask of it. In return, ChatGPT aims to be as precise as a computer can be in its answer. And, as with all things, we need to validate, assess, and edit as appropriate.
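Cherian’s point about specificity can be illustrated with a short sketch. The `build_prompt` helper below is hypothetical (it is not part of ChatGPT or any OpenAI tool); it simply shows how layering context – audience, tone, length – onto a vague request produces the kind of specific prompt that gets better results from a model such as Davinci-003.

```python
# Hypothetical helper: illustrates prompt engineering by assembling a
# specific prompt from a vague task plus optional context. This is a
# sketch, not an OpenAI API – no network call is made here.

def build_prompt(task, audience=None, tone=None, word_limit=None):
    """Turn a vague task description into a more specific prompt."""
    parts = [task.strip().rstrip(".") + "."]
    if audience:
        parts.append(f"Write for this audience: {audience}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    if word_limit:
        parts.append(f"Keep it under {word_limit} words.")
    return " ".join(parts)

# A vague ask gives the model little to work with.
vague = build_prompt("Write a fundraising appeal")

# A specific ask encodes who, how, and how long.
specific = build_prompt(
    "Write a fundraising appeal for a gifts-in-wills campaign",
    audience="long-standing donors aged 60+",
    tone="warm, personal",
    word_limit=150,
)

print(specific)
```

Either string could then be pasted into ChatGPT (or sent via an API); the second will reliably produce copy that is closer to usable, which is the whole point of ‘garbage in, garbage out’.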
Q3: Does ChatGPT pose a risk to the charity sector?
CK: A lot of sector buzz around ChatGPT relates to how the misuse of this technology can be harmful and how this could harm jobs in the sector. But in relation to that concern, it’s important to remember that artificial intelligence (AI) pre-existed ChatGPT by years. In fact, OpenAI’s models are neither the largest nor the best. They simply have access to lots of money from Microsoft to make it easily accessible.
But that, in a nutshell, is the most exciting aspect. AI and machine learning were once limited to the most resource-heavy, forward-thinking organisations. Today, anyone with a browser can leverage it for generating ideas, creating content, analysing data, and we’ve just scratched the surface of what can be done.
In the next few months and years, we’ll see huge advances: what used to take hours or days to complete will take seconds or minutes instead. We live in an age where things that once cost huge sums of money, like website creation or data analytics, can be done just as effectively but at a much lower cost.
ChatGPT will be built into everyday tools shortly. I’m most excited about seeing how every organisation might use these tools to save time and resources so fundraisers can shut down their computers and spend more time with friends and family!
Without a doubt, there are always risks to quite literally any technology. Nearly all those risks are mitigated by having humans in the loop. Where human intellect, reason, lived experience, and judgement are absent, negative outcomes abound. Where humans need to be particularly on alert is the use of any technology that includes or excludes.
For example, artificial intelligence is already being used by financial institutions to determine who has access to capital such as credit cards, loans, and even life insurance. The stakes of a wrong answer – if it creates philanthropic exclusion – can be significant.
Similarly, humans should be on alert where personally identifiable information (PII) is used for classification or sorting. Machines, like humans, are susceptible to bias and can easily lack context and an understanding of history. Predictive models can’t independently correct for systemic racism or previous policies that adversely impacted specific groups of people, which could render results such as ‘capacity to give’ or ‘likelihood to give’ inaccurate. That’s not to suggest that AI modelling of donors is completely wrong or useless – but we should be aware of its limitations.
Q4: Should fundraisers be worried about this technology? Or are some organisations successfully using it already?
CK: The use of AI in fundraising is not new, and it is being more widely applied each day.
Back in 2016, the Chronicle of Philanthropy’s front page featured Amy Lampi who, at the time, was working with the Houston Alley Theatre. She was part of an article describing several examples of predictive analytics being used to support fundraising.
Late last year, Furniture Bank, a Toronto-based charity that collects used household items for people in need, switched to artificial-intelligence-generated images in its 2022 holiday campaign. Doing so enabled the organisation to protect the identities of those they served without subjecting them to objectifying or dehumanising photos.
And dozens of other organisations are using similar tools to create appeals in English when it is not the fundraiser’s native language. Hundreds more are using it for ideas and first drafts of content, even using it for brainstorming and strategy building. In the next few months, we’ll see thousands more use it to clean up data and use data more effectively. When it’s built into Microsoft Office and Google in short order, we’ll still debate it like we do about calculators on maths exams, but with much less controversy.
Even as its uptake increases, artificial intelligence is not coming for your job. It’s come to make your job a little bit easier and a lot faster. My robot vacuum cleaner doesn’t pick up every bit of dust, but I use it every day, so I have less to clean up after it. Likewise, there are industries like computer programming where there is an exact way to code something. That industry will change dramatically in the span of a year.
Fundraising won’t change that fast because it depends on relationships among people. Yet if AI can help you finish work one hour faster or have a greater impact for your cause, I think it will be worth it.
Editor’s note: If you missed part one of this thought-provoking series on ChatGPT, you can catch up and discover what Emily Casson, Deniz Hassan and Matt Smith had to say, here.