A great many people have spent a great deal of time thinking about what differentiates evaluation from research. I won’t press too far into this, other than to share that if you Google the phrase “difference between evaluation and research” as of the posting of this Blog, you get over 5000 hits. Now, I’m sure there is much repetition in the form of quoting others and the like, but still – 5000 pages in which that phrase occurs. Well, I’m going to talk about one aspect that affects my life almost daily – issues of language.
The current state of the art of evaluation suggests that a good evaluator is one who engages their stakeholders. What does that really boil down to? You have to talk to the stakeholders. Now, stakeholder is actually a very large word – and no, I don’t mean the number of letters or the fact that it might be a higher-order vocabulary word for the SAT, ACT, or GRE. Rather, the concept of a stakeholder can spread across many different groups of individuals depending upon what sort of approach and philosophy you have about programming and evaluation. I’m not going to go into the various possible combinations, but suffice to say that you can be dealing with people who fund the program, implement the program, participate in the program, are not participating in the program, are in the community where the program is implemented, are not in the community where the program is implemented, and on and on and on. The combinations aren’t so much important to this Blog post as what constitutes these stakeholders’ backgrounds, understanding, and vocabulary.
A few years ago, I had a discussion amongst individuals who all work for the same organization. These individuals all held the same position within the organization. This organization used and currently still does use the word – Objective. When asked what the word meant, the broadest definition could be – what the program is trying to achieve. However, that is where things broke down. For some, Objective meant a Programmatic Outcome. For others, Objective equated to a Programmatic Output. Still for others, an Objective was an Organizational Outcome. And for yet another group, it was a change in Organizational Infrastructure. All were focused on “Measurable Objectives”, but no one really agreed on what an Objective was. After a year’s worth of discussion and negotiation, we came to the agreement that an Objective would be a Programmatic Outcome or Organizational Outcome. At least we got it to an “Outcome”.
When was the last time you had to have a discussion about the language amongst researchers? Ok, those of you who do language research, put your hands down! You get my point, I hope…
But the point was driven home to me again today. Along with three other evaluators, I met with a group of folks to discuss an evaluation project we are designing. During this meeting, I used another word that I thought we all understood – “Comparison Group” – and was shocked to discover that their impression of what the term meant diverged from my own and that of the other evaluators. When they heard “Comparison Group”, they translated that to “Control Group”. They had a decent definition of a Control Group, and we all know that engaging a Control Group for a study can require significantly more resources than engaging a Comparison Group, especially when the Comparison Group is not individually matched.
[Pausing for a moment here, because my own language may differ from your own… Control groups are usually associated with randomized controlled trials (RCTs), and the costs of conducting an RCT in community-based programming and evaluation are very high. Control groups are a subset of comparison groups, which are simply groups against whom you compare the outcomes of the group that experienced your program.]
The meeting around this study was rapidly devolving and the design was in jeopardy until I figured out that this was a language issue and not a design problem. The team had agreed to the design. They were under the impression that I was forcing a more rigorous study that would be costly across several domains. I was under the impression that they were stepping back away from the design and wanting something significantly less rigorous. Conflict was brewing. Fortunately, the issue of language was identified before things spun out of control.
I’ve presented the idea before and I’ll present it again. We need better-informed consumers of evaluation. Too often, I find myself and other evaluators changing language and/or dropping evaluation vocabulary from discussions to attempt to avoid misunderstandings. I’m starting to wonder whether we are doing our clients and ourselves a disservice. In our own desire to make things easier for everyone in the short term, we might be causing issues for the next evaluator. Worse, like the discussion around the term Objective, our looseness of language might cause more confusion. I’m considering a short study for myself – keeping the evaluation language in and attempting to be more precise in my definitions with my clients – to see if I can reduce confusion. Anyone else want to give this a try? I would also like to hear your thoughts on the idea.
As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.
Best regards,
Charles Gasper
The Evaluation Evangelist
Wednesday, February 23, 2011
Charles, I have taught, done, and consulted on many evaluations for a long time. A recurring concern is the inconsistency of language. I once asked Michael Scriven about this inconsistency. His response was something along the lines of evaluators need to make terminology clear. Ok. Simple solution. Yet, the confusion continues.