Thursday, July 2, 2009

Evaluation, "Best Practices", and Fears

Those of you following my blog and Facebook updates/tweets know that I traveled out to Baltimore a few weeks ago to speak about how our Foundation uses the information we collect as part of evaluation to inform our strategic planning. Recently, I have been in the throes of data analysis for an evaluation we are doing of an initiative that funds organizations with whom our Foundation does not normally partner. These organizations are not traditionally associated with health care delivery, but were viewed by our Board as a possible link to populations not normally reached by our more traditional partners. The evaluation and analysis have gone well and I have some interesting things to share with our staff, Board, and the grantees we serve – but the experience has given me food for thought.

First and foremost, there is a common issue of organizations collecting data but not having the resources to aggregate it in a meaningful manner. The solution is pretty simple and inexpensive relative to other interventions: for those organizations without it, buy them a spreadsheet program like Excel and/or some data management software like Access. Of course, the other key part is to teach them how to use the software. It is very frustrating for an evaluator to discover that the data is all there, ready to be used, but cannot be accessed by the people who need to see it, because they are simply unable to take the next step and put it into some sort of system that renders it available for analysis.
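To make that concrete, here is a minimal sketch, in Python with the pandas library rather than Excel or Access, of what "rendering the data available for analysis" can amount to. The file name and column names are hypothetical stand-ins for whatever records a grantee is already keeping; the point is only how little it takes to turn raw tallies into a usable summary.

import pandas as pd

# Each row of the (hypothetical) log is one entry the grantee already records:
# a date, the program name, and the number of clients served that day.
records = pd.read_csv("service_log.csv", parse_dates=["date"])

# Roll the raw tallies up into clients served per program per month,
# the kind of summary a program manager or funder can actually act on.
summary = (
    records
    .assign(month=records["date"].dt.to_period("M"))
    .groupby(["program", "month"])["clients_served"]
    .sum()
    .reset_index()
)

print(summary)

The same roll-up can be done with a pivot table in Excel or a simple query in Access; the tool matters far less than having the data in some structured form and someone on staff who knows how to ask it questions.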

However, this is not the issue I’m interested in exploring in this post. Rather, this issue is a symptom of another problem – namely, the requirement by funders that grantees collect data to report. In many cases, if the grantee were using the data to support their internal operations, they would already have mechanisms to analyze it – even if that is tabulation on a piece of paper. Instead, funders are asking for information that the grantee does not find valuable to their organization, and thus they collect and collect and then have difficulty figuring out what to do next.

Part of the underlying issue has to do with the evaluation community that I belong to. Yep, I’m calling us out on this one. Every time I hear an evaluation contractor or organization (funders included) try to “sell” a nonprofit on evaluation, the first words out of their mouth are often: “the data you collect will help you solicit funding.” Raise your hand if you have heard or said it. Incidentally, my hand is raised too. I’m just as guilty.

We have trained the nonprofit community to believe that evaluation is something you add on to a program, not something that can be a useful part of a program. But worse, while many funders require an evaluation, they often do not fund it or provide additional capacity building for their grantees. I’m finding that few foundations actually provide workshops, much less more in-depth evaluation training, to their grantees.

But it gets worse – much worse – and, from my recent experience, somewhat scary for the future of funding.

There is a growing trend for funders to focus on funding “Best Practices”. At the conference in Baltimore, I heard that phrase tossed around often. The staff at my own organization specifically ask for “Best Practice” programming when soliciting proposals for certain initiatives and priority areas. But here’s the dilemma: if no one is supporting evaluation, how are “Best Practices” identified?

Recently, one of our Board members shared with me a program that is very interesting and, judging from the evaluative work they have done so far, very promising. The Board member came to me looking to see if I knew anyone who funded evaluation. Our Foundation funds programs, and the evaluations are in support of those programs. In this case, they were looking to establish the evidence that the program works well in order to promote it nationally. Beyond a small group of funders, you would be hard pressed to find someone who would fund an evaluation on the scale they are requesting – and even then, only for the programming those funders support. So, the question to me was, “Where do we find funding just for evaluation?” And the answer?... I had no clue. Well, I had one idea, and I’m forwarding it on to a colleague who will remain nameless, as I want to surprise her with it. But even in that case, her organization funds research, not evaluation, so it is probably a stretch.

The point I’m making is that funders and nonprofits rely upon evaluation, even if they aren’t aware that they are doing so. “Best Practices” evolve from the results of evaluative work. But if nonprofits are only evaluating to appease funding requirements or to solicit more funding, then the quality and focus of the evaluation often shift away from program improvement and analysis toward tracking operational goals (basic outputs), and toward outcomes that are selected either to demonstrate success (as opposed to test for it) or that are simply determined by the funder, who is not the expert in the programming compared to those implementing it.

So, back to the grantees and the programs I’ve been evaluating. First of all, given the funding trend, such innovative funding might be squelched in the future because the programs do not have “Best Practice” status. Their programs aren’t as effective as they could be because the grantees are not fully aware of how their programming is performing. And yes, I must admit, their lack of knowledge of Excel or another spreadsheet program was a blind spot for me. I’m moving to rectify it for my grantees, but it makes me wonder how many more nonprofits suffer from a lack of the skills and resources needed to take the hard work they have done in collecting data and make it more useful. How many organizations still tabulate on a piece of paper or, at best, use their spreadsheet program for only that purpose? I’m afraid of the answer, as I am afraid of the focus on “Best Practices”.

I have an answer that I’m implementing within the sphere of influence I have, namely with my own grantees:

1) Over a year ago, I stopped “selling” evaluation as a way to get more funding and began focusing the grantees’ (and my own Foundation’s) attention on program improvement.

2) I have started workshops for my grantees and other nonprofits in the region to teach evaluation methods.

3) I require that all my grantees that receive funding beyond basic infrastructure conduct an internal evaluation.

4) My Foundation funds these evaluations, along with measurement supporting additional questions we have about the initiative group as a whole.

But, my experience has also taught me that I need to provide more basic training and support for the grantees in the form of funding for software and training to use it.

So, what can you do? If you are a funder or work for a funder, you might want to take a look at what your policies are around evaluation, how you use it, and what you encourage your grantees to do with it. If you work for a nonprofit, I urge you to think about how you use the measures you collect and why you collect them. But before you run off to do that (because I know this has inspired you to push away from your computer and get busy), I would very much like to hear your thoughts and reflections.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist
