Tuesday, March 24, 2009

So You Thought You Made an Impact

A great deal of conflict surrounds what constitutes an evaluation. As I noted in my earlier post, there are some folks in the evaluation world who do not believe an evaluation has been conducted unless there has been some assessment of value. Value is often translated by stakeholders to mean "impact". So, what is this "impact" and how does one assess it? To begin, I think we need to look at the reasons for the program's creation - in other words, why did the program get started in the first place? Most programs, much like small businesses, are created based on the idea of someone who thinks there is a "market" for their product. Depending upon how invested the individual is in the idea and the resources available to that person, some research goes into how best to conduct the program and whether the market is really large enough to support the new endeavor. Be that as it may, the "impact" of the program is considered at some level. It is here that the value of the program is at least considered by the initiator.

So, why do I mention impact? A quick look at the Dow Jones Industrial Average, or for that matter just about any other index, gives you a sense that people are a bit nervous about money. Funders, be they government, private individuals, or foundations, are becoming a bit more selective about whom and what they fund. They are no longer content with vague statements such as "people learned something" or "the participants enjoyed the experience". Rather, they are starting to ask the question - "did the money really result in change, or at least forestall things getting worse?" It is within this context that nonprofits now promote their work.

However, there is more to the story...

While I work for a funder and answer to a Board of Directors that is concerned that its investments have an impact - there is also a notion that good business sense would indicate that any viable organization has a strategic plan and that its activities support that plan. Embedded in the concept of the strategic plan are goals and statements of impact. As such, any organization that implements something new, or is maintaining some aspect of its programming, should have linkages between its actions and the global impacts the organization is trying to effect.

And so we get to the relationship of value and evaluation. In my case, I do not think it is always useful or appropriate to attempt to measure impact as part of an evaluation - but I do believe that evaluation should recognize that all programming should have some sort of impact. If a program is in its early stages, the data might not be there yet to show change, but it might be there for a pre-test. Further, since we are on the topic of value, the impact of the program might not be the focus of the major stakeholders at the time - but it should be acknowledged, if only to inform the process evaluation (the description of the context of the program and its rationale).

But, before I push further and make the argument that evaluation doesn't exist if there isn't acknowledgement of planned impact or value, I need to clarify a specific distinction. Namely, there is a difference between the methodology of an evaluation and the purpose of an evaluation. Certainly, they are linked - but oftentimes I hear the statement - "we can't consider the impact of a program because it won't really happen until well past the funding ends". This statement is focused on the methodology of the evaluation, not its purpose. It is an excuse statement that attempts to take any consideration of impact off the table and focus everyone on only the actions of the program. Again, it is important to assess the process of a program, but without understanding the end (impact), a significant portion of the picture that should inform the process evaluation is lost.

We can speak to the number of hours of training we give to septuagenarians and their experience with a program, but if we are attempting to change attitudes towards violence, unless their voice is really respected in their communities, odds are we are barking up the wrong tree. Now, hopefully, the training is actually deployed to individuals of all ages and the septuagenarians are just part of the mix, but if the methodology of the evaluation is not informed by the purpose of the program (and its intended impact), the evaluation's focus might never shift from sampling the elderly to sampling younger individuals. Further, in the case of social marketing, what speaks to the elderly might not reach the 20-somethings. And so, the data we gather in the evaluation may indicate that the 70-somethings are really enjoying and learning from the training, yet not be indicative of how the rest of the "target audience" feels or learns.
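
To make that sampling point concrete, here is a minimal sketch in Python. The participants, scores, and age cutoff are entirely invented for illustration; the point is only that an aggregate number can hide the subgroup the intended impact actually depends on:

# Hypothetical illustration: overall training results vs. results broken out by age group.
# All data, field names, and the age cutoff are made up for this example.
participants = [
    {"age": 72, "attitude_change": 0.8},
    {"age": 75, "attitude_change": 0.7},
    {"age": 68, "attitude_change": 0.9},
    {"age": 24, "attitude_change": 0.1},
    {"age": 31, "attitude_change": 0.2},
    {"age": 27, "attitude_change": 0.0},
]

def mean(values):
    return sum(values) / len(values) if values else float("nan")

overall = mean([p["attitude_change"] for p in participants])
older = mean([p["attitude_change"] for p in participants if p["age"] >= 60])
younger = mean([p["attitude_change"] for p in participants if p["age"] < 60])

print(f"Overall change: {overall:.2f}")   # looks respectable in aggregate
print(f"60 and older:   {older:.2f}")     # the group we happened to sample heavily
print(f"Under 60:       {younger:.2f}")   # the group the intended impact depends on

Run it and the overall average looks acceptable, while the under-60 group shows almost no change - which is exactly why the evaluation's sampling should be informed by the program's purpose, not just its activities.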

So, I pose the argument that value and the consideration of impact are important - whether impact is actually measured or not.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper
The Evaluation Evangelist

Monday, March 16, 2009

Introductions

Good day to you,

Having found this Blog, you came here for one of a few reasons:
  1. You are interested in evaluation and the idea of an Evaluation Evangelist excited you.
  2. You met me or one of the group excited by evaluation and heard about the site and thought you might take a look.
  3. You followed the link from the Missouri Foundation for Health's website to my Blog.
  4. Google hiccupped and your search brought you here instead of whatever you were searching for.

Hopefully, having found this, you will find some information of interest. As this is the first post on this Blog, I'm not sure what fruit this experiment will bear, but I expect it to be worth both our whiles.

With the caveats above, let's delve into this further shall we?

What is an Evaluation Evangelist? To answer that, we need to discuss what Evaluation is and what an Evangelist is. There are long dissertations tied to both terms, and at some point I'll probably edit this posting to incorporate links to good descriptors, but for now let's go with working definitions.

Evangelist - Two "definitions" I like can be found here - http://en.wikipedia.org/wiki/Evangelist. In the religious connotation, an Evangelist is a Christian [in my case, an Evaluator] who explains his or her beliefs to a non-Christian [or in my case, folks not yet versed in Evaluation] and thereby participates in Evangelism [attempting to convert people]. The second definition secularizes the "role" of an Evangelist by describing one as a person who enthusiastically promotes or supports something. Apple (http://apple.com) has had a group of individuals who held the actual title. In my case, the title is a commentary bestowed upon me by co-workers and others in the Evaluation field.

Speaking of Evaluation - there are probably a million different definitions for the term. However, the version I like best is something to the effect of: a systematic method of assessing something. Beyond that, there are discussions about whether an evaluation truly exists without an assessment of value (a focus on the outcomes of the project), what minimum point of rigor in methodology constitutes an evaluation, who needs to be involved in the process, and on and on and on. In reality, it seems that globally, we all have different viewpoints on what constitutes evaluation and, frankly, I do not consider myself a part of any real distinctive camp - except one... I believe that an evaluation should be "theory-driven". In other words, the things we do are intended in some manner to affect an outcome. Put more simply, we do stuff because we want something to happen. When I turn the key in my car's ignition, I do so because I want the car to start. There is a direct and perhaps a series of indirect outcomes of my action. But, I did what I did for a reason.

Theory-driven evaluation is the idea that the actors, the evaluator, and all other stakeholders (people interested and invested in the project or action) are expecting something to come of the action and are looking for the "impact" of the action. Now, there are times that we do not measure the "impact" - usually because it is expected to occur much later. Other times, a description of the actions is important to all interested parties. But, my focus is on the idea that while we might only focus measurement on the actions and their produced outputs (products), we need to recognize the connection with the eventual outcomes (impacts). Using my example of turning the key - I can describe the amount of torque I use to turn the key, or for that matter, the process of finding and inserting the key... From a car design perspective, these are important things to measure - but the reason the key slot and the starter mechanism are there is to get the car started.
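
One way to picture the theory-driven idea is as a simple logic model that ties an activity to its outputs and to the eventual outcome it is meant to serve. The Python sketch below is only an illustration of that structure, using my car-starting example; the class and field names are my own invention, not a tool we use at the Foundation:

# Hypothetical sketch of a logic model entry: every activity is linked to the
# outputs it directly produces and the longer-term outcome (impact) it serves.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModelEntry:
    activity: str                                        # what we do
    outputs: List[str] = field(default_factory=list)     # what the activity directly produces
    intended_outcome: str = ""                           # the impact the activity exists to serve

key_turn = LogicModelEntry(
    activity="Turn the key in the ignition",
    outputs=["Key inserted", "Starter engaged"],
    intended_outcome="The car starts",
)

# Even if we only measure the outputs today, the entry keeps the eventual
# outcome in view - which is the heart of the theory-driven idea.
print(f"{key_turn.activity} -> {', '.join(key_turn.outputs)} -> {key_turn.intended_outcome}")

The measurement may stop at the outputs for now, but the link to the intended outcome stays explicit - which is all I am really asking of an evaluation.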

I'll speak more on the topic in a later post. In the meantime, John Gargani has an interesting commentary on the impact of a less than systematic assessment and how theory needs to play a role in some manner - http://gcoinc.wordpress.com/2009/03/13/data-free-evaluation/#more-161.

Returning to a little about me - my true role and title is Director of Evaluation for the Missouri Foundation for Health (http://mffh.org). The Foundation funds programs and projects focused on improving the health of Missouri's uninsured, underinsured, and underserved. I have oversight for evaluation for the Foundation, which means that I shape policy around evaluation as well as work with our contracted evaluators. I also have the occasional opportunity to conduct my own evaluations.

In the next series of posts, you will learn more about the policies we have enacted and how we have implemented them both within the Foundation and in support of the nonprofit community in Missouri.

Additionally, as you follow this Blog or catch a Twitter or Facebook comment, it is my fervent hope that you grow to love Evaluation as much as I do.

Best regards,

Charles Gasper