Tuesday, March 24, 2009

So You Thought You Made an Impact

A great deal of conflict surrounds what constitutes an evaluation. As noted above, there are some folks in the evaluation world who do not believe an evaluation has been conducted unless there has been some assessment of value. Value is often translated by stakeholders to mean "impact". So, what is this "impact" and how does one assess it? To begin, I think we need to look at the reasons for the program's creation - in other words, why did the program get started in the first place? Most programs, much like small businesses, are created based on the idea of someone who thinks there is a "market" for their product. Depending upon how invested the individual is in the idea and/or the resources available to that person, some research goes into how best to conduct the program and whether the market is really large enough to support the new endeavor. Be that as it may, the "impact" of the program is considered at some level. It is here that the value of the program is at least considered by the initiator.

So, why do I mention impact? A quick look at the Dow Jones Industrial Average - or, for that matter, just about any other index - gives you a sense that people are a bit nervous about money. Funders, be they government, private individuals, or foundations, are becoming a bit more selective as to whom and what they are funding. They are no longer content with vague statements like "people learned something" or "the participants enjoyed the experience". Rather, they are starting to ask the question: "did the money really result in change, or at least forestall things getting worse?" It is within this context that nonprofits now promote their work.

However, there is more to the story...

While I work for a funder and answer to a Board of Directors that is concerned that their investments have an impact, there is also the notion that good business sense dictates that any viable organization has a strategic plan and that its activities support that plan. Embedded in the concept of the strategic plan are goals and statements of impact. As such, any organization that implements something new, or is maintaining some aspect of its programming, should have linkages between its actions and the global impacts the organization is trying to effect.

And so we get to the relationship of value and evaluation. In my case, I do not think it is always useful and/or appropriate to attempt to measure impact as part of an evaluation - but I do believe that evaluation should recognize that all programming should have some sort of impact. If a program is in its early stages, the data might not be there yet to show change, but it might be there for a pre-test. Further, since we are on the topic of value, the impact of the program might not be the focus of the major stakeholders at the time - but it should be acknowledged, if only to inform the process evaluation (the description of the context of the program and its rationale).

But, before I push further and make the argument that evaluation doesn't exist without acknowledgement of planned impact or value, I need to clarify a specific distinction. Namely, there is a difference between the methodology of an evaluation and the purpose of an evaluation. Certainly, they are linked - but oftentimes I hear the statement, "we can't consider the impact of a program because it won't really happen until well past the funding ends". This statement is focused on the methodology of the evaluation, not the purpose. It is an excuse that attempts to take any consideration of impact off the table and focus everyone on only the actions of the program. Again, it is important to assess the process of a program, but without understanding the end (impact), a significant portion of the picture that should inform the process evaluation is lost.

We can speak to the number of hours of training we give to septuagenarians and their experience with a program, but if we are attempting to change attitudes towards violence, unless their voice is really respected in their communities, odds are we are barking up the wrong tree. Now, hopefully, the training is actually deployed to individuals of all ages and the septuagenarians are just part of the mix, but if the methodology of the evaluation is informed by the purpose of the program (and its intended impact), the focus of the training evaluation might shift from sampling only the elderly to sampling younger individuals as well. Further, in the case of social marketing, what speaks to the elderly might not reach the 20-somethings. And so, the data we gather through the methodology of the evaluation may indicate that the 70-somethings are really enjoying and learning from the training, yet not be indicative of how the rest of the "target audience" feels or learns.

So, I pose the argument that value and the consideration of impact are important - whether impact is actually measured or not.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper
The Evaluation Evangelist
