Thursday, May 28, 2009

Impact and Evaluation

Perhaps I'm feeling saucy today, but there were two different concepts I wanted to address, and I didn't think both fit in one blog post. So, dear reader, you are subjected to a Two-for-Thursday...

So, in the previous post, I mentioned that there is a lot more to be learned from evaluation than just the value of a program. I'm going to change gears a bit here and remind you that I also believe that assessing the impact of a program is very important. It strikes me as a bit reckless to deploy or fund a program or intervention and not observe whether the "thing" done had a positive impact. Yes, I have heard the complaint that it takes quite some time to get to the impact, and that some change is so incremental as to be difficult to measure, but I find those arguments spurious. Yes, some impacts do take time and some interventions are a "drop in the bucket," but there are always shorter-term changes that theoretically (yes, back to program theory) are linked to the larger change. Think of them as the stepping stones that lead to the great change. By choosing not to look at any sort of outcome, the evaluator and, more importantly, the individuals funding and deploying the program or intervention are being irresponsible.

I'll be writing more on this topic in the next few days and weeks as I struggle with the various arguments that have been made against evaluation of impacts. As always, I welcome comments and questions - especially if you are of the ilk that finds evaluation of impact to be impractical, impossible, and whatever other "im" you can come up with.

Best regards,

Charles Gasper

Evaluation & Strategic Learning

In a couple of weeks, I will be presenting to a group of health-focused funders on the benefits of evaluation and research. Specifically, I'm talking about how our Foundation takes the information we gather from evaluation and uses it to inform our future grantmaking. If you are interested, here is the website for the event - http://www.gih.org/calendar_url2665/calendar_url_show.htm?doc_id=706939. The title of my presentation is "Strategic Learning: Integrating Information and Assessing Your Foundation's Impact," and while the presentation will focus on the various avenues we are exploring for impact assessment, there are other important bits of information we have gleaned that have dramatically shaped our grantmaking.

Most people think of evaluation in the framework of value. I mentioned that in an earlier post - if you haven't read it, I find it riveting. [What? I can't be sarcastic in my posts?] But the statement in this case really does hold. Travel with an evaluator sometime on a site visit. A significant portion of the first meeting with a team you are "evaluating" is spent explaining that "this isn't an audit" and that "we aren't here to play 'gotcha'." Like it or not, the term evaluation has become synonymous with the favorite activities of the IRS. [Note to anyone affiliated with the IRS: I really like you guys, you do good work. I'm just talking about your reputation, not my own personal feelings. No need to audit me.] Members of the evaluation community have actually discussed using a different word when bringing up the topic of evaluation, and a few folks do. Because of this, the general public doesn't quite understand that there are many more benefits to evaluation beyond discovering whether a program or process actually worked. The process of evaluation allows those interested to learn a whole lot more about what is being evaluated than just that. For example...

There is some literature out in the world suggesting that having a network of resources, and using it, is a strong indicator of the probability that an organization and its programs will be sustained for a longer period of time. Curiously, it appears that the network is actually more important than funding. I suspect the two are somewhat interdependent - having a larger network is probably predictive of more opportunities to find funding. But I didn't start writing this example just to explore that argument further with you - we are actually doing some research with our past grantees to see what bubbles to the top for prediction of sustainability. For now, it seems that networks are very important.

The reason I can say that is that some of our evaluation work has looked at the sustainability of programs (we are a "seed" funder, funding new programs or expansions of current programs, not a sustaining funder). We want the programs we fund to be sustained past our funding cycle and have oriented some of our evaluation work to look at how this works. Some of the interesting information was that networks are important (did you notice that I keep driving this point home?), but depending on whether you are a rural or an urban nonprofit, the relative importance changes - other things bubble up to the top. It is this discovery that led us to the research on prediction of sustainability. The eventual idea is that our program staff will have a checklist to support their review of grant applications, allowing them to assess the probability of an organization and its programs being sustained after funding. How we use the tool may vary, but the initial intent is to identify issues with a possible grantee from the very beginning and to start providing support to help them grow in those areas of weakness.
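To make the checklist idea concrete, here is a minimal sketch of how such a review tool might combine factor ratings into a score. Everything in it is hypothetical - the factor names, the weights, the rural/urban split, and the cutoff are placeholders for illustration, not the Foundation's actual criteria or research findings:

```python
# Hypothetical sketch of a sustainability-review checklist.
# Factor names, weights, and cutoffs are illustrative placeholders only.

# Relative weights might differ for rural vs. urban applicants, echoing
# the observation that the importance of networks varies by setting.
WEIGHTS = {
    "rural": {"resource_network": 0.5, "funding_diversity": 0.3, "staff_capacity": 0.2},
    "urban": {"resource_network": 0.35, "funding_diversity": 0.35, "staff_capacity": 0.3},
}

def sustainability_score(setting, ratings):
    """Combine 0-1 factor ratings into a single weighted score."""
    weights = WEIGHTS[setting]
    return sum(weights[factor] * ratings[factor] for factor in weights)

def flag_weak_areas(setting, ratings, cutoff=0.5):
    """List factors rated below the cutoff so staff can target support early."""
    return [f for f in WEIGHTS[setting] if ratings[f] < cutoff]

applicant = {"resource_network": 0.4, "funding_diversity": 0.7, "staff_capacity": 0.6}
score = sustainability_score("rural", applicant)
weak = flag_weak_areas("rural", applicant)
```

The point of the second function matches the stated intent: not to reject applicants, but to surface weak areas up front so support can be provided from the beginning.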

The concept of networks also resides in some other evaluation work we are conducting. One of our initiatives focuses on funding "non-traditional" programs - basically, funding organizations that normally would not be working with us. A large percentage of these organizations are faith-based and/or small community organizations. Given that we recognize that resource networks are important, we wanted to know how these organizations work. What we have discovered is that these small community organizations excel at developing resource networks. What seems to be a difficult concept for our more traditional grantees is second nature to these small community organizations, which live and breathe through their resource networks. Teasing out how and why their networks develop is a huge "learning" that we can share with our traditionally funded grantees. The information we are gathering has nothing to do with whether their programs are "successful"; rather, it tells us how they do their business and succeed in areas that seem to be a struggle for others.

The point I'm making here is that we have learned some impact-unrelated information that is changing, and will continue to change, how we conduct grantmaking. This and other nuggets of information have changed what support we provide and how we provide it. Mind you, we are interested in impact, and I'll address my thoughts on that in another post, but I thought it a good time to remind all of us that there is more to evaluation and strategic learning than whether our funding "did good work".

As always, I'm open to your comments.

Best regards,

Charles Gasper

Friday, May 8, 2009

Things!

What's a thing? Merriam-Webster's definition can be found here - http://www.merriam-webster.com/dictionary/thing. But to someone who has learned to write clearly, to use the word "thing" is to cheat - to say something when you can't really say anything. In high school and college, professors drove into my brain that use of the word is unacceptable. To this day, I find myself still using it when I can't think of the word that should be used. As an example, I was recently asked, "What is an activity and an output?" "Well," I answered, "they are things that..." I continued with my explanation and stated what activities and outputs are, but I had used the dreaded word.

Now, why am I talking about the word thing? To most people, it is at worst a lazy use of English, but to an evaluator, it is much more insidious. In the past few weeks, I've had the opportunity to review applications for funding, hold discussions with applicants, and meet with a few of our grantees. In many cases, when we discussed their program, there was sort of a placeholder - a thing that was not fleshed out in the development stage and now was sticking out like a sore thumb. In some cases, it was an assumption that something would happen. In other cases, it was something that simply wasn't considered. In every case, it was a barrier to the success or effectiveness of the program.

As you know, I'm a bit of a fan of program theory focused evaluation and the construction of theory-based logic models of programs or projects. The process forces the designer to think about the things in their model and, at minimum, identify them. A well-used model gives the designer an opportunity to think more about each thing and objectify it (make it real versus an abstract idea). Finally, once in place, it helps the designer with the next critical question - how do I measure my program?

This is a very common question that I hear from the nonprofits I work with. How do they measure their processes, their outputs, their outcomes? The first question I always ask is, "What are they?" The answer I often get is that they had this idea with a bunch of things. So I often ask them to draft out their model. Once the model is started and they begin defining the things and what they are connected to, the issue often resolves itself. The question becomes not "how do I measure my program?" but rather "what techniques and tools can I use to measure this specific activity, output, or outcome?" The underlying question changes from "what really is my program?" to "what are the most efficient and effective ways to measure the parts of my program?" It is at that point that evaluation starts to make sense to the designer, and the information needed to support the program is clarified.
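One way to picture that shift - purely as a hypothetical sketch, not a tool described here - is to treat the logic model as linked elements, each either paired with a measurement approach or not. The program names and measures below are invented for illustration; the useful part is that any unmeasured "thing" surfaces immediately:

```python
# Hypothetical sketch: a logic model as linked activities, outputs, and
# outcomes, each paired (or not yet paired) with a measurement approach.
# All names and measures are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Element:
    name: str
    kind: str                         # "activity", "output", or "outcome"
    leads_to: List["Element"] = field(default_factory=list)
    measure: Optional[str] = None     # None marks an unmeasured "thing"

tutoring = Element("weekly tutoring sessions", "activity", measure="attendance logs")
skills = Element("improved reading skills", "output", measure="pre/post assessment")
graduation = Element("higher graduation rate", "outcome")  # no measure yet

tutoring.leads_to.append(skills)
skills.leads_to.append(graduation)

def unmeasured(elements):
    """Return the named 'things' that still lack a measurement plan."""
    return [e.name for e in elements if e.measure is None]

gaps = unmeasured([tutoring, skills, graduation])
```

Once each element is named and linked, the question is no longer "what is my program?" but "which measure fits each element?" - the gaps list points straight at the placeholders still needing attention.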

So - my advice to nonprofits who are designing and/or running programs and now need evaluation is to engage in a form of program theory modeling as you develop the program, to ensure that you really understand what it is you are trying to accomplish. For evaluators, the model will prove invaluable in shaping the evaluation work in which you engage.

As always, I'm open to comments - please feel free to share your own thoughts on this.

Best regards,

Charles Gasper