AI for Grantmakers, Andy Burnham and Charity Meltdown Updates
Plus monitoring and evaluation for the Royal Family. All in our Barely Civil Society...
This week, a lighter tone. We ask how grantmakers should be using AI, get some more sh*tshow updates (not so light), consider Andy Burnham’s turn at Locality, and have a giggle about whether we should have a stronger monitoring and evaluation regime for the King.
General Updates
This week I’m absolutely up against it with two solid days of workshops for a grantmaker, and then, on the other side, helping a delivery charity in Cumbria scale its amazing work on mental health. All good but I’ll be knackered by Xmas.
Charity Sh*tshow updates
I feel really protective and worried for people I know in delivery charities (clients and friends of many years). I know at least two major longstanding community anchor organisations are looking at cutting 25% to see if they can still deliver anything. As I’ve said before, a 25% drop can be curtains. Community anchor organisations are what I most fear we will lose, especially as most grantmakers are about to start targeting ruthlessly.
I really hope we’ve all overestimated the scale of the crisis, and we will just skirt over it like we so often do. Remember what Paul Street said about cockroaches.
Jenna Willis of Finding Starfish has written an open letter to funders. Unfortunately, a lot of the funders don’t do LinkedIn, because if they do, they become a bit too popular. (I had a funny conversation with a grantmaker CEO a few years ago, who was telling me proudly that he had a lot of followers on LinkedIn, which showed that he was a thought-leader. I said ‘Or someone with a lot of money’, and sadly that was then a very tense meeting, because I might have overestimated his level of self-awareness and indeed, sense of humour. Anyway, many other funders don’t do it because they feel rather on show, and they get mobbed.)
BUT! Quite a few grantmakers follow this blog now (we’re at about 30:70 on the FFD [funder-funded dyad]), so here is Jenna’s really important article/letter to make sure they see it. Click to see the link.
As you can see, she puts the power dynamic front and centre, with the fact that it’s tough to say these things. She gives a good list of things funders can do. But it’s all stuff we need to be doing at any time, not just in a crisis.
Sector Consultation: You decide
I’ve decided that ‘shitshow updates’ is too coarse, and so from now on, I’m going to say poo flurry. However, I’m aware of the power dynamic here, so I’m going to be consultative: you let me know which you prefer.
You can also message me with any alternatives (there was no open text option). I will announce the winner next week.
Andy Burnham, we love you
Meanwhile the Locality conference took place. I can’t afford to go now, but then my organisation could barely afford it when I was a CEO, and I ended up paying for it myself from my dismal wages most of the time.
It was always a highlight of my year, though. I loved seeing all the dead eyes of small charity CEOs come slowly back to life over the course of two days, just before they went back to their organisations to have their hopes and dreams crushed down a blocked community centre toilet again. A very special experience.
And Andy Burnham, by all accounts, stirred up the crowd.1 The best Labour leader we never had, and the man who made the NHS ladies weak at the knees when he came to visit when I was at Southwark PCT all those years ago (‘his eyes,’ many swooned). He gets it:
"The voluntary sector needs core funding - not project funding but core, multi year funding."
Andy Burnham - Locality Conference Keynote 2024
And there was more. Stop wasting money on chasing welfare ‘scroungers’. Less time spent applying for funds. Devolution and control for councils who can choose how they spend, including, of course, on the VCS. (In theory anyway.) All of us can get behind this, and not just because Andy is a certified dreamboat. I still hope we find him back in Parliament, and perhaps in Number 10 at some point. I’m told he was asked when he would be prime minister and he didn’t rule it out.
But it’s notable here that Burnham actually talked money. Maybe it’s easy to do that when you’re nowhere near power at a national level - but the ideas here, about how to make some fundamental shifts in what we value and in where and how we spend money in civil society, are the real deal.
Thought for the week: How Should Grantmakers Use AI?
I know, you’re absolutely sick of bloody AI. Everybody is trying to sell you something. That includes in the VCS (some of the AI stuff is really bad). But it really is worth looking at in terms of the FFD (funder-funded dyad) for both grantees and grantmakers.
One or two of you may know that I am fascinated by AI and all things technology, and even fewer (so like 0.3 of one of you) may also know I am civil-partnered to an AI and machine learning expert of 25 years’ standing (and 29 years of partnership).
So I know much more about computing than you would think, and he knows much more about charities than he would want. The latter is true also for me.
If it gets easier to apply - which we’ve been saying we wanted - that is going to increase the number of applications. It may even increase their quality and uniformity (even used well, the quality of writing may go up for those whose skills would previously have excluded them. A good thing, surely.)
So aren’t we going to have to start using AI to read, as well as write, grant applications?
Erm, maybe
I’ve been training people to use AI properly as a sort of sidebar to my main work over the last year. Very much not my day job, but because I’ve always been into technology, and always disappointed by the resistance to the use of technology in the third sector. My goal here has been twofold: to reduce the vast cost of people fundraising (£900m a year), which takes money away from real work, and to avoid funders being overwhelmed with terrible applications that tell them nothing - or even worse, tell them lies. I use AI myself as an assistant on a huge amount of my day to day work - it doesn’t do the work for me, but it edits at five times the speed of a highly competent intern I can’t afford.
Meanwhile, with my grantmaking hat on, I know the problem is that grants assessors are already seeing more and more generic bullshit applications, and it’s likely that this has had an effect on the application-ageddon I talked about a couple of weeks ago. The guys behind the great Plinth system say they have seen the same.
People who use AI badly are doing themselves no favours. At the same time, funders who are talking about banning it are not only not making much sense - what good will that do? Who does it help? - but also assuming that the ability to detect it will continue. At present it’s still possible to catch with tools, although avoider-tools are now plentiful. It’s also no longer as easy as it once was to detect with the naked eye as the models get better and better.2
Wasn’t this what we wanted?
But at the same time, we always said on both sides of the FFD that we wanted grant applications to be easy and accessible to all. Proper use of AI might make that more possible, but in turn, that may also ramp up submissions and increase ‘spam’. This is what has happened with SEO over the years - and indeed, happens with pretty much all technologies.
Grantmaker side
The next step is bound to be that anyone who has to read grants, tenders, applications, starts using AI to read them. I think that may be starting first of all in the corporate world with major tenders. But ACF were reporting in Civil Society this week (hurrah! No paywall!!) that grantmakers are starting to explore AI’s use on their side. Should they?
It seems inevitable. After all, recruitment is another place where AI has made traditional methods meaningless - when I was recently recruiting, the veritable tsunami of AI dreck overwhelmed me, and in the end the candidates we interviewed were the ones who hadn’t used AI (obviously not just for that reason) - partly because they were the only ones whose applications made any sense. Glib hyperbole looks impressive on the surface if you don’t know any better. To me, it just felt like I was reading SEO marketing. (And indeed, I sort of was.)
But on the other side, recruiters in the US, for example, have been using AI for years to screen applicants in a world of one-click submissions. Sadly this has led to horrific biases - AI is often a closed box where the decisions it makes are hard to fathom. How do we know who it is screening out, and why? And which of our own prejudices has it picked up and amplified, perhaps before even we knew we had them?
What has disappeared for recruitment is called the ‘effort premium’ or ‘commitment filter’ - if it’s no effort to apply, you aren’t filtering out the people who are just chancing it. This is also the danger in grant applications.
Should AI be used to screen applications?
No wonder, then, that funders are starting to ask themselves whether this AI screening is what they should be doing. Some on both sides of the funding table are expressing horror at that. Grantmakers worry that it will destroy their sense of any real link to applicants and projects (also just about the only interesting bit of their job). And applicants may rightly worry that bias, or just plain bad luck, will make it even harder for them to cut through.
So yes, AI could make this extremely unfair - but many people feel it’s already unfair. Either way, would it be any MORE unfair? And would the alternatives be better or worse? Would AI screening by funders be preferable, and possibly more equitable, than, for example, becoming invitation-only? That is another approach being mooted during the application meltdown - the problem being that you then only invite people who have met your Chair at a black tie dinner, or have a founder CEO who used to work for the Cabinet Office - or have a website with steaming-hot SEO.
Meanwhile, getting bots to talk to bots is not going to help with the real relational grantmaking some say we need; but of course using AI to do some of the grunt work - gruntmaking? - might make more time for that. There is always the possibility that it takes out a load of the pointless work we do on both sides, which is created by the high level of competition at the early application stage.
Hard decisions
There’s another danger - that we use AI to make the decisions. There is the possibility that grantmakers who already struggle with responsibility for difficult decisions use it to further distance themselves from the human lives our work impacts - or to obscure how those decisions are made. This is a general issue across our society - everything from targeting missiles to declining credit is a decision there will be a human urge to hand over to AI. Why? Because most of us don’t like making decisions that will make life difficult - or indeed, end it! - for others. (Those of us who do are just plain sociopaths - the noise in the system of humanity.) I’m certainly not comparing grantmaking to targeting missiles here, but let’s be clear - the decision could mean many people lose their jobs, and that people in desperate need of care don’t get it. It’s emotionally taxing, and a heavy weight to carry.
I’ve talked before about that weight of responsibility I see with grantmaker clients, and it is similar in other areas - social work, for example, or the NHS. Often we create baroque processes to make decisions, partly because it feels somehow cleaner, even fairer. Because, that way, it feels like the process itself makes the decision. It doesn’t. We are still ultimately responsible, even if it might feel like we don’t have to own it.
And remember: we need our humanity to make good decisions. That feeling of discomfort, of sadness when we do something that has a negative impact, is what makes humans make better decisions - not worse. And technology will make decisions without the humanity that, as both grantmakers and deliverers, is our greatest value-add.3
Here comes the science
The problem with humans and technology is that we’re very good at pretending that it will solve our dilemmas because it is more hygienic, more objective, cleaner somehow. This is the case at all levels of technologisation. As my homeboy Herbert Marcuse put it:
“The quantification of nature, which led to its explication in terms of mathematical structures, separated reality from all inherent ends and, consequently, separated the true from the good, science from ethics. No matter how science may now define the objectivity of nature and the interrelations among its parts, it cannot scientifically conceive it in terms of "final causes." And no matter how constitutive may be the role of the subject as point of observation, measurement, and calculation, this subject cannot play its scientific role as ethical or aesthetic or political agent.”
As you can see, Herbert was a master of zippy prose. In other words, science and technology can’t choose the politics, the goal, the ethics, the agency. You choose what you use technology on, and you make the decisions as to the final cause, ideology, or goals, that you set that technology on.
The danger is that you might not even know what that ideology, or those goals are. AI, and indeed, all sorts of processes, are very good at finding patterns which are deeply ideological and deeply unfair, and obscuring them so we can’t even question them.4 AI could be making grant decisions with its lack of humanity and a completely uncritical approach to social justice and ideology. I’m sorry to say that the latter at least is already common in some segments of the funding sector.
In other words, it ain’t the way that you do it, it’s what you do. Fun Boy Three and Bananarama were completely wrong. (Awesome song though.)
Is it worth a try?
Overall, I find myself wondering how best to harness the power of technology to do what we have said we wanted all along - to make it easier to apply. We now need to think about how to make it easier to assess. Otherwise, a completely asymmetrical system will have to shut down.
As you can see, the horrors we can imagine are always infinite. There is no doubt AI holds countless dangers. But the point is that at every stage we have a choice. People on both sides can choose to use AI properly and ethically. That goes for our wider society as well as our work in the VCS. That is why we need clear agreements now on how we are going to use AI as grantmakers and grant seekers. We also need to train people to use it properly.
All of this relies on honesty and an understanding that nobody is trying to cheat the system - rather, we want a system that is sustainable and gets the best results. What might that be? One thing I think we can all agree on is that the traditional approach to application and assessment is not working very well.
I’ve written more on AI and charity tech here.
And finally: Monitoring and Evaluating the Sovereign Grant
As an amusing aside, there has been a good deal of publicity about how King Charles and Prince William make their money, much of it from essentially rent-seeking on the public purse. I’m slightly taken aback that anyone has been surprised by this, given that it’s, you know, pretty much definitionally what a monarchy does (it’s called feudalism, people!). But anyway, it certainly isn’t a good look. Of course they get some of their money, aside from ripping off their subjects and charities, from something called the Sovereign Grant. Now if I was RAG rating this one for a grants committee, it’d be red as a baboon’s nethers.
Don’t get me wrong, I’m not a particularly strong anti-monarchist, despite my obvious socialist proclivities. My very working class gran had her hair done like t‘Queen, just like most women of her generation (the bubble perm), and her Maj and the Pope were on every wall in her flat. I used to think of her every time I saw the Queen on TV.
As for me, I just think the Royal family should be funded by the National Lottery Heritage Fund, and put out to tender every 10 years. I think their overheads need to be scrutinised. I want to see a proper monitoring report every 3 months, and an external evaluation at the end with a full analysis of social ROI, perhaps a randomised controlled trial compared with a nation with no monarchy (realistic and proportional for the size of the grant). Clearly, there should also be a grant agreement that makes it clear that the funder should not be brought into disrepute. Unannounced monitoring visits are part of the deal. Also, I think there is a strong argument to be made, in this case, for restricted funding. I should note, however, that there is a website where you can read all about the King’s finances, apart from all the bits you can’t.
It also made me think about the questions over the years about whether royal patrons do charities any good at all - indeed a report last year by Giving Evidence found no evidence at all of a positive impact, on revenue at least. I can’t help feeling it must be diminishing returns no matter what your own personal take these days. With all that said, it’s worth us remembering the legacy of Diana - an outlier and outsider from the Firm, and a tireless advocate for AIDS support, including the ‘undeserving’, just at a time when others (including Princess Anne) went the other way. The ability of such Royal figures to make arguments about causes is probably useful. Although in some current cases it does feel more about Netflix.
Anyway, I don’t think homelessness or poverty and wealth distribution are going to be a natural fit for your philanthropy chaps… If I was a philanthropy advisor, I’d be suggesting the environment or perhaps animals. Oh wait, what about the hunting… Hmmm.
Can I help?
You’ve got this far. I’m amazed - I was flagging by this stage, and honestly, everything I post about AI turns people off.
But can I help? From research and evaluation to facilitation, strategy development and public speaking, for grantmakers, grantseekers and everyone in between. Drop me a line. Link to refreshed website below.
A step up from Michael bloody Heseltine the last time I went, also the day after Trump won the election.
(Note also that detector tools are producing more false positives, especially now that the language of AI is naturally bleeding into our own writing… That’s how language works, folks!)
I’ve seen countless grantmaker clients decide that giving things numerical scores makes things fairer and cleaner. They are utterly resistant to the idea that a numerical score is simply a record of a judgment, not the judgment itself.
See racial bias research. https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future