I started writing this post about two days ago and discovered rather quickly that I was writing more than would fit in one Blog post – so… Instead, I’m going to subject you to a series of posts reconciling the wants and needs of funders around evaluation with the wants and needs of the small to medium sized nonprofits with whom I’m familiar. Tossed in will be some reflections on reading I’ve been doing for school, and of course, you get my own opinions and thoughts – full force!
To begin, I would suggest you take a look at my Blog post in January (2011 if you are reading this years from now – hello future people!). Go ahead – I’ll still be here when you get back…
So, you read my comments about my frustration with the lack of outcomes information coming to me from organizations soliciting me for donations. Well, those winds of change came quickly: a partner in funding is looking for outcomes, as is my own Board. My CEO, who gave me the name “Evaluation Evangelist,” has pointed out to me a few times that “a prophet is rarely heard in his own land,” and my previous warnings that nonprofits and foundations need to attend to outcomes (versus outputs and other processes) went unheeded. And as with all crises, I think we are at the beginning of a change.
Before I go further, I should tell you that while I believe we should always consider the outcomes of programs, projects, advocacy, and whatnot – there is a time and place for evaluating said outcomes. This is tied to the questions the stakeholders have for the evaluation and what is possible to measure, given the theory of change of the program. Today, the “top” funding stakeholders are asking for outcomes and unfortunately, because of their attention, nonprofits are going to need to react. Why do I say “unfortunately”? Because the interest in programmatic outcomes didn’t originate in the nonprofits delivering the programs.
Granted, I have access to a small number of nonprofits, but in their study of nonprofits, Reed and Morariu (www.innonet.org) found that more than 75% of nonprofits spent less than 5% of their budget evaluating their programs – and 1 in 8 spent no money at all. Additionally, funders were characterized as the “highest priority audience for evaluation” and – surprise – outcomes and impact evaluation were rated as the highest priority. So my experience with nonprofits, while limited, does seem to echo the broader population of nonprofits.
So, if this has always been the case – otherwise we wouldn’t have the results of the Innovation Network’s State of Evaluation 2010 – why would I be concerned? Sure, I have been an advocate for evaluation use, and my advocacy alone (bigger names than mine have been advocating for a lot longer) shouldn’t effect change. In fact, one could argue that I should be pleased – interest in evaluation is increasing in the funding community. Except there is little education in the funding community around evaluation. There is little use of evaluation by the funding community. And the expectations coming out of the funding community are the equivalent of taking an older car that has never gone faster than 20 miles per hour and slamming on the accelerator to go 80 miles per hour (for those of us who use metric, you can substitute KPH and still see the analogy). Nonprofits that had at best conducted some pre-test/post-test analyses of knowledge change in their program participants (and more likely did a satisfaction survey and counted participants) are now being required to engage in significantly more sophisticated evaluations (ranging from interrupted time series designs to randomized controlled trials). The level of knowledge required to conduct these types of studies with the implied rigor associated with them (I say “implied” only because I can find a comparison group for anything – it just might not be appropriate) simply does not reside in most nonprofits. They haven’t been trained, and they certainly don’t have the experience.
The funding community’s response is to offer, and in some cases require, an external contractor to support the evaluation. This could lead me to talk about the difficulties of finding qualified evaluators, but we won’t get into that in this post. It is an issue. However, what occurs with the involvement of an external evaluator? They do the work to support the funder’s objectives, and after the funding for the project ends, they tend to leave too. There is also an issue around funding the evaluation at the level of rigor required – that too will come in another post. But the message I want to leave you with here is that engaging an external evaluator does little to increase the buy-in, much less the capacity, for the organization to engage in internal evaluation. The “firewall” preventing bias of an internal evaluator (e.g., organizational pressure to make the organization look good), while certainly improving the funder’s perception that the evaluation is more rigorous, does little to help the nonprofit other than to aid them in maintaining their current cash flow. [Incidentally, I’ll address the internal versus external evaluator conflict in a later post as well. I think this is something we could all benefit from exploring.]
So – what am I advocating for? Let’s not take that older car onto the highway just yet. Let’s listen a bit more closely to evaluation thought leaders like David Fetterman and consider what we can do to improve the capacity of organizations to do their own evaluations. Let’s show them how attending to outcomes might help them improve their organization and the services they provide to their participants. Perhaps we should think about evaluation from the standpoint of use, and then apply the rigor that is reasonable and possible for the organization. Bringing in an external evaluator who applies techniques and methods beyond the reach of the organization produces something mostly for the funder, not for the nonprofit. At best, the nonprofit learns something about that one program, but beyond that – nothing. To them, it could almost be considered a research study rather than an evaluation. Let’s partner with nonprofits and get them up to the speed we want, with careful consideration and deliberation rather than just slamming on the accelerator.
As always, I look forward to any comments or questions you might have. Please feel free to post comments.
The Evaluation Evangelist