Nathan Chappell: Generosity, and the Case for Responsible AI
Changemaker Profile 003
Artificial intelligence is increasingly positioned as a solution to many of the nonprofit sector’s most pressing challenges: declining generosity, constrained resources, and rising expectations from donors. Yet, as adoption accelerates, far fewer leaders are pausing to ask what kind of sector this technology is shaping. For Nathan Chappell, Chief AI Officer at Virtuous, that question sits at the heart of his work.
Across more than two decades in fundraising, technology, and social impact, Nathan has developed a distinctive position: AI is neither inherently good nor bad, but a powerful amplifier of human intent. Used thoughtfully, it can deepen trust, strengthen relationships, and scale generosity. Used carelessly, it risks accelerating the very disconnection the sector is trying to address.
After an inspiring chat with Nathan at the launch of his newest book at the University of York last year, I asked him whether he could be convinced to feature on a new start-up podcast, which at that point was just an early idea emerging from my human brain.
I think this first episode is a brilliant opening to our Purpose Brief journey. It sets the scene for you to listen in, grab some thinking time, and consider how AI could unlock potential for more good in the charitable sector.
Here's our briefing on our conversation. The podcast is available now to stream across all the platforms.
Social Good as a Human Constant
A defining feature of Nathan's thinking is his insistence that generosity is not fragile, outdated, or in decline by nature. Instead, he frames social good as something deeply human: a biological and social instinct that persists even as the systems around it change.
As he explains in conversation on the first episode of the Purpose Brief Podcast:
“Social good for me is a state of mind, not something that you do. It’s a way of being. Humans are biologically wired to give back — the noise and distraction in society are what pull us away from that.”
This framing matters. It shifts the problem away from individual motivation and towards institutional design. If generosity is intrinsic to us as humans, then the role of organisations (and the technologies they adopt) should be to remove friction, not introduce it. AI, in this context, becomes a tool for reconnection rather than extraction. But technology, if implemented badly, can knock organisations out of kilter.
Nathan has authored a number of fascinating books; we have picked two which we encourage you to investigate in more detail:
The Generosity Crisis
In The Generosity Crisis, Nathan develops this argument in detail. He challenges the assumption that declining giving reflects donor apathy or economic scarcity. Instead, he points to transactional fundraising models, impersonal systems, and over-automation that weaken trust and belonging over time.
The book reframes generosity as a relational act constrained by organisational behaviour rather than donor willingness. Technology, he argues, has too often been deployed to ask more frequently instead of listening more carefully. This diagnosis lays the groundwork for his later work on AI: without intentional leadership, new tools risk scaling the wrong behaviours faster.
Nonprofit AI
Chappell’s most recent book, Nonprofit AI, brings this critique firmly into the present. Written with the assistance of AI tools themselves, the book is both a practical guide and a moral intervention. Its central claim is that nonprofit organisations cannot evaluate AI using the same frameworks as the private sector, because they operate in the currency of trust rather than profit. It gives insight into ethical considerations and makes an excellent introduction to shape your thinking.
A key distinction Nathan introduces is the differentiation between ethical AI and beneficial AI. Ethical AI meets minimum expectations around bias, privacy, and transparency. Beneficial AI goes further, requiring leaders to consider long-term societal impacts, unintended consequences, and whether technology ultimately strengthens or weakens community over time. This places responsibility squarely at board and executive level, rather than delegating it to technical teams.
The Future of Fundraising
One of the most consequential implications of this area of work is how it reframes the future role of fundraisers. AI will undoubtedly automate tasks, accelerate research, and improve efficiency. But Nathan draws a clear ethical and practical boundary around what should remain human.
As Nathan mentions in the podcast:
“A bot can give you information, but it can’t make you feel heard. Optimising an analog person with lots of digital bots around them is amazing — but human to human connection is still what matters.”
This insight reframes AI not as a replacement for fundraising professionals, but as an amplifier of judgement, empathy, and connection for the fundraisers on the front line, creating new routes for donors who wish to engage in different ways. The fundraisers who thrive will not be those with the narrowest technical expertise, but those who can connect ideas, exercise discernment, and build trust. High-performing teams will be supported by intelligent systems rather than overshadowed by them. I hear this and I know it to be true!
We talked about the role of prospect researchers and how AI could potentially make them the most influential members of the team. As someone who was one of the first prospect researchers at a university in the UK, I am really excited about how AI and a human expert could partner to ensure major gift and alumni teams are focused on their portfolios and freed from data churn. This has the potential to change the game for small fundraising teams across sectors.
Leaders are key to adoption
Throughout his work, Nathan is clear that AI adoption is not primarily a technical challenge. It is a leadership one. While tools are becoming cheaper and more accessible, particularly for small, resource-constrained organisations, it is often governance, clarity of values, and strategic intent that lag behind.
He warns against ungoverned “shadow AI” use, while also cautioning that incremental change is insufficient in an exponential world. Leaders must engage directly with AI, not to chase novelty, but to shape how it aligns with mission, culture, and trust both within an organisation and also in our communities.
The Resolute Purpose View
At Resolute Purpose, we see Nathan Chappell’s work as emblematic of the challenge now facing the social impact sector. AI will continue to accelerate, but the question is not whether organisations adopt it; it is whether they do so with intention, restraint, and responsibility. It's out there, and it creates huge opportunity as well as significant headaches.
Nathan's contribution is not technological evangelism, but moral clarity. His work reminds us that generosity cannot be automated; it can only be augmented. It's a real opportunity to get things right. But there is a warning here: don't break it. You could lose trust.
Responsible AI, in this sense, is not about slowing progress, but about ensuring that progress remains human. There is a short window of opportunity for organisations of all sizes to land this in a highly competitive and noisy communication space.
Find out more about Nathan's work via the following links.