Wednesday, February 23, 2011

Language and Evaluation

A great many people have spent a great deal of time thinking about what differentiates evaluation from research. I won’t press too far into this, other than to share that if you Google the phrase “difference between evaluation and research” as of the posting of this blog entry, you get over 5,000 hits. Now, I’m sure there is much repetition in the form of quoting others and the like, but still – 5,000 pages in which that phrase occurs. Well, I’m going to talk about one aspect that affects my life almost daily – issues of language.

The current state of the art in evaluation suggests that a good evaluator is one who engages his or her stakeholders. What does that really boil down to? You have to talk to the stakeholders. Now, stakeholder is actually a very large word – and no, I don’t mean the number of letters or the fact that it might be a higher-order vocabulary word for the SAT, ACT, or GRE. Rather, the concept of a stakeholder can spread across many different groups of individuals, depending upon what sort of approach and philosophy you have about programming and evaluation. I’m not going to go into the various possible combinations, but suffice it to say that you can be dealing with people who fund the program, implement the program, participate in the program, are not participating in the program, are in the community where the program is implemented, are not in the community where the program is implemented, and on and on and on. The combinations aren’t so much important to this blog post as is what constitutes their backgrounds, understanding, and vocabulary.

A few years ago, I had a discussion with a group of individuals who all work for the same organization. These individuals all held the same position within the organization. This organization used, and still uses, the word – Objective. When asked what the word meant, the broadest definition could be – what the program is trying to achieve. However, that is where things broke down. For some, Objective meant a Programmatic Outcome. For others, Objective equated to a Programmatic Output. Still for others, an Objective was an Organizational Outcome. And for yet another group, it was a change in Organizational Infrastructure. All were focused on “Measurable Objectives”, but no one really agreed on what an Objective was. After a year’s worth of discussion and negotiation, we came to the agreement that an Objective would be a Programmatic Outcome or Organizational Outcome. At least we got it to an “Outcome”.

When was the last time you had to have a discussion about language amongst researchers? OK, those of you who do language research, put your hands down! You get my point, I hope…

But the point was driven home to me again today. In a meeting with the folks I was working with, along with three other evaluators, we discussed an evaluation project we are designing. During this meeting, I uttered another term that I thought we all understood – “Comparison Group” – and was shocked to discover that their impression of what the term meant diverged from my own and that of the other evaluators. When they heard “Comparison Group”, they translated it to “Control Group”. They had a decent definition of a Control Group, and we all know that engaging a Control Group for a study can require significantly more resources than engaging a Comparison Group, especially when the Comparison Group is not individually matched.

[Pausing for a moment here, because my own language may differ from your own… Control groups are usually associated with randomized controlled trials (RCTs), and the costs of engaging in an RCT in community-based programming and evaluation are very high. Control groups are a subset of comparison groups, which are simply groups against whom you compare the outcomes of the group that experienced your program.]
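
If a concrete illustration helps, here is a minimal sketch of the distinction, entirely my own and with made-up numbers. The control group is created by random assignment within the study itself; the comparison group is just a separate, non-randomized group whose outcomes get lined up against the program group's.

```python
# Illustrative sketch only. A control group comes from random assignment
# within the study; a comparison group is any unexposed group you compare
# against (here, a non-randomized group from a similar community).
# All scores and the "program effect" of 5 points are invented.
import random
import statistics

random.seed(42)

# Control group: random assignment decides who gets the program.
participants = [random.gauss(50, 10) for _ in range(200)]  # baseline scores
random.shuffle(participants)
treated, control = participants[:100], participants[100:]
treated = [score + 5 for score in treated]  # pretend the program adds ~5 points

# Comparison group: a separate, non-randomized group (e.g., a nearby
# community that did not get the program), which may differ at baseline.
comparison = [random.gauss(52, 10) for _ in range(100)]

print("Treated vs control difference:   ",
      round(statistics.mean(treated) - statistics.mean(control), 2))
print("Treated vs comparison difference:",
      round(statistics.mean(treated) - statistics.mean(comparison), 2))
```

The randomized version gives a cleaner estimate of the program's effect, which is exactly why it costs so much more to pull off in community settings.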

The meeting around this study was rapidly devolving and the design was in jeopardy until I figured out that this was a language issue and not a design problem. The team had agreed to the design. They were under the impression that I was forcing a more rigorous study that would be costly across several domains. I was under the impression that they were stepping back away from the design and wanting something significantly less rigorous. Conflict was brewing. Fortunately, the issue of language was identified before things spun out of control.

I’ve presented the idea before and I’ll present it again. We need better-informed consumers of evaluation. Too often, I find myself and other evaluators changing language and/or dropping evaluation vocabulary out of discussions to attempt to avoid misunderstandings. I’m starting to wonder whether we are doing our clients and ourselves a disservice with this. In our own desire to make things easier for everyone in the short term, we might be causing issues for the next evaluator. Worse, like the discussion around the term Objective, our looseness of language might cause more confusion. I’m considering a short study for myself – to keep the evaluation language in and attempt to be more precise in my definitions with my clients – to see if I can reduce confusion. Anyone else want to give this a try? I would also like to hear your thoughts on the idea.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, February 9, 2011

The Man in the Middle

I started writing this post about two days ago and discovered rather quickly that I was writing more than should fit in one blog post – so… Instead, I’m going to subject you to a series of posts discussing how to reconcile the wants/needs of the funder around evaluation with the wants/needs of the small- to medium-sized nonprofits with whom I’m familiar. Tossed in will be some reflections on some reading I’ve been doing for school and, of course, you get my own opinions and thoughts – full force!

To begin, I would suggest you take a look at my blog post from January (2011, if you are reading this years from now – hello, future people!). Go ahead – I’ll still be here when you get back…

So, you read my comments about my frustration with the lack of outcomes information coming to me from organizations soliciting me for donations. Well, those winds of change came quickly, and a partner in funding is looking for outcomes, as is my own Board. My CEO, who gave me the name “Evaluation Evangelist”, has pointed out to me a few times that “a prophet is rarely heard in his own land”, and my previous warnings that nonprofits and foundations need to attend to outcomes (versus outputs and other processes) went unheeded. And as with all crises, I think we are at the beginning of a change.

Before I go further, I should tell you that while I believe we should always consider the outcomes of programs, projects, advocacy, and whatnot, there is a time and place for evaluating said outcomes. This is tied to the questions the stakeholders have for the evaluation and what is possible to measure, given the theory of change of the program. Today, the “top” funding stakeholders are asking for outcomes, and unfortunately, because of their attention, nonprofits are going to need to react. Why do I say “unfortunately”? Because the interest in programmatic outcomes didn’t originate in the nonprofits delivering the programs.

Granted, I have access to a small number of nonprofits, but in their study of nonprofits, Reed and Morariu (www.innonet.org) found that more than 75% of nonprofits spent less than 5% of their budgets evaluating their programs – 1 in 8 spent no money at all. Additionally, funders were characterized as the “highest priority audience for evaluation” and, surprise – outcomes and impact evaluation were rated as the highest priority. So my experience with nonprofits, while limited, does seem to echo the broader population of nonprofits.

So, if things have always been this way – and we wouldn’t have the results of the Innovation Network’s State of Evaluation 2010 otherwise – why would I be concerned? Sure, I have been an advocate for evaluation use, and just because I’ve been advocating for it (bigger names than mine have for a lot longer), that shouldn’t effect change. In fact, one could argue that I should be pleased – interest in evaluation is increasing in the funding community. Except there is little education for the funding community around evaluation. There is little use of evaluation by the funding community itself. And the expectations coming out of the funding community are the equivalent of taking an older car that has never gone faster than 20 miles per hour and slamming on the accelerator to go 80 miles per hour (for those of us who use metric, you can substitute km/h and still see the analogy). Nonprofits that had at best conducted some pre-test/post-test analyses of knowledge change in their program participants (and more likely did a satisfaction survey and counted participants) are now being required to engage in significantly more sophisticated evaluations (ranging from interrupted time series designs to randomized controlled trials). The level of knowledge required to conduct these types of studies with the implied rigor associated with them (I say implied only because I can find a comparison group for anything – it just might not be appropriate) simply does not reside in most nonprofits. They haven’t been trained, and they certainly don’t have the experience.
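
For readers who have not seen one, the sketch below (my own, with invented scores) shows roughly what that baseline pre-test/post-test analysis of knowledge change looks like. The more sophisticated designs now being demanded go well beyond this.

```python
# A minimal sketch of a basic pre-test/post-test analysis of knowledge change,
# the kind of study many nonprofits are starting from. The scores are invented;
# a real study would use actual participant data and a proper statistical test.
import statistics

pre  = [4, 6, 5, 7, 3, 6, 5, 4, 6, 5]   # knowledge scores before the program
post = [6, 7, 7, 8, 5, 7, 6, 6, 8, 6]   # the same participants afterwards

gains = [after - before for before, after in zip(pre, post)]
mean_gain = statistics.mean(gains)
sd_gain = statistics.stdev(gains)
n = len(gains)

# Paired t statistic: mean gain divided by its standard error.
t_stat = mean_gain / (sd_gain / n ** 0.5)

print(f"Mean pre score:  {statistics.mean(pre):.2f}")
print(f"Mean post score: {statistics.mean(post):.2f}")
print(f"Mean gain:       {mean_gain:.2f}  (paired t = {t_stat:.2f}, n = {n})")
```

Even this simple design assumes someone on staff can collect matched pre and post data and interpret the result – a long way from running an interrupted time series or a randomized trial.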

The funding community’s response is to offer, and in some cases require, an external contractor to support the evaluation. This could lead me to talk about the difficulties in finding qualified evaluators, but we won’t talk about that in this post. It is an issue. However, what occurs with the involvement of an external evaluator? They do the work to support the funder’s objectives, and after the funding for the project ends – they tend to leave too. There is also an issue around funding the evaluation at the level of rigor required – that too will come in another post. But the message I want to leave you with here is that engagement of an external evaluator does little to increase the buy-in, much less the capacity, for the organization to engage in internal evaluation. The “firewall” preventing bias of an internal evaluator (e.g., organizational pressure to make the organization look good), while certainly improving the funder’s perception that the evaluation is more rigorous, does little to help the nonprofit other than to aid them in maintaining their current cash flow. [Incidentally, I’ll address the internal versus external evaluator conflict in a later post as well. I think this is something we can all benefit from exploring.]

So – what am I advocating for? Let’s not take that older car onto the highway just yet. Let’s listen a bit more closely to evaluation thought leaders like David Fetterman and consider what we can do to improve the capacity of organizations to do their own evaluations. Let’s show them how attending to outcomes might help them improve their organization and the services they provide to their participants. Perhaps we should think about evaluation from the standpoint of use and then apply the rigor that is reasonable and possible for the organization. Bringing in an external evaluator who applies techniques and methods beyond the reach of the organization results in something mostly for the funder, not for the nonprofit. At best, the nonprofit learns something about that one program, but beyond that – nothing. To them, it could almost be considered a research study rather than an evaluation. Let’s partner with the nonprofits and get them up to the speed we want, with careful consideration and deliberation rather than just slamming on the accelerator.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist