Friday, October 23, 2009

Evaluation, Standards of the Profession

Well, it certainly has been some time since I posted here. As I mentioned earlier, my goal is to post twice a month. Clearly, I’m not meeting my milestones. After a review of my process, I have modified my plan and expect to meet that goal going forward.

Today’s topic focuses on the professional side of evaluation. Up until now, I’ve been presenting the rationale for evaluation and its importance in organizational and program development and maintenance. In the past few months, I ran into an issue that has made me think more about the profession of evaluation and its standards. Funders and nonprofits are often faced with the decision of hiring outside their organization for evaluation support. How does one pick an evaluator, and how does one know whether an evaluator is good?

These are great questions, and ones I am still struggling with. I would say that the contractors I work with are fantastic. Yet the only commonality they share is that they are university based. Otherwise, they have different backgrounds, different skill sets, and different views on how evaluation should be conducted. Last year, I brought them together to share their work with each other, and it is something that our Foundation will continue to do. The differences among these contractors were rather striking; some are noted below:

· Quantitative versus qualitative methods

· A focus on different aspects of the program (e.g., sustainability, quality improvement)

· Different background training and focus (e.g., public health, public policy, education)

However, there was a common factor that all shared: they had training in and followed “research” methodologies tied to their backgrounds. While there are some differences in language, individuals trained in public health, psychology, social work, and sociology have all taken basic coursework in social science research methodologies. Because all of these evaluators are university based, they are required to conform to human subjects rules and pass their designs through an Institutional Review Board (IRB). That is a very large commonality, and it constrains the work that they do. Further, it establishes a form of minimum standard for these evaluators.

But evaluators aren’t based only at universities. There are many independent contractors who provide evaluation services. These contractors can come from backgrounds similar to the ones I listed above, but can also have other backgrounds that vary in education (type and level), philosophy, and technique. Those without social science backgrounds may have learned different standards of “research.” Finally, most of these contractors are not subject to any form of IRB. As a result, there is the possibility of greater variation. The purpose of these thoughts is not to speak to the idea of variation, for I believe that it can be both good and bad, depending on the situation, the needs of the stakeholders, and so on. Rather, I want to look at this issue through the lens of minimum standards.

So, to identify a minimum standard, we first need to agree on what evaluation is. Again, we can argue this, as different cultures have different views on it. Instead, let us assume that you and I have our own definitions, with the common idea that at the end of the day, we will have the information we want to have. So, I would argue that the first standard of evaluation is really driven by the needs and wants of the primary and associated stakeholders. In my framework, that means developing a theory-based logic model of some type that links what the program or project is doing with the outputs and outcomes we are trying to affect, which in turn informs my team as to what they might want to know. Additionally, there are other strategic needs that can inform the evaluation design and the minimum standard for review (e.g., organizational strategic focus, environmental assessment needs).

Once this first standard of informational need is identified, we have the minimum standard of what we want to know. The next step is to identify the how and the what of the work to be done, or some sort of methodological standard. This is where things get a bit complicated, but they can be teased out and cleaned up.

To begin, there is the basic tenet of human subjects protections, which borrows a bit from the Hippocratic oath: “do no harm.” If some harm must come to the participants, then the benefits of what is learned must outweigh the costs, and reasonable efforts must be taken to ensure that the damage is addressed. Incidentally, I would propose that the organizations engaged should also be viewed in this manner. The evaluation should not damage the organization, and reasonable efforts should be taken to ensure that any damage is addressed. Unfortunately, I have had an experience in which tenets of this rule were not applied at the organizational level (specifically, informed consent to the evaluation), and some damage was done and, worse, ignored. So, the second standard of professional evaluation should be to not harm the individuals, programs, or organizations engaged in the process.

I should clarify that the manner in which individuals and organizations go about applying their evaluation-derived information can and should be covered under this standard as well. It is the evaluator’s responsibility to ensure that the organization receiving the information is given every opportunity to interpret it correctly. Beyond ensuring that the information is interpreted appropriately, however, I don’t bind the evaluator.

The third standard would be the acceptance of and willingness of the evaluator to be bound by the Guiding Principles of the American Evaluation Association - http://www.eval.org/Publications/GuidingPrinciples.asp. In essence, the Guiding Principles cover the first two standards listed above, but I feel they are important enough to call out separately. The Guiding Principles also address, in general terms, the concepts of systematic inquiry (including educating stakeholders on methodology and limitations), evaluator competence, integrity, respect for people, and responsibilities for general and public welfare. While membership in the American Evaluation Association does not indicate that an evaluator considers themselves “bound” by these principles, members have been made aware of them in various forms, including the Association’s website and the American Journal of Evaluation.

Earlier this decade, members of the American Evaluation Association discussed developing constraints to better define an evaluator. Ideas floated included an exam and some sort of certification. Even this year, the membership still struggles with identifying and developing a tighter, more distinguishing definition of an evaluator. Again, one can find calls for an Association-related certification, but given the breadth of what defines evaluation, a single test or even a series of tests has been rejected by other members. Many universities provide training in evaluation and/or evaluation-linked skills - http://www.eval.org/Training/university_programs.asp - as do other organizations that provide professional training and, in some cases, certification. This patchwork of diplomas, certifications, and training provides something in the way of constraint. One will have a better sense of the skills and training of a graduate of the Claremont Graduate University or Western Michigan programs, but it requires the person hiring said evaluator to be familiar with the programs. That means that I, as Director of Evaluation for my Foundation, must be familiar with these and other programs. Fortunately, I’ve been a member of the Association for several years and have had a goodly amount of contact with faculty and graduates of these programs. I have not had much contact with faculty and graduates of American University or California State University, Los Angeles. I have known people who attended The Evaluators’ Institute - http://www.tei.gwu.edu/ - but I am unfamiliar with their work and know little beyond the brochures that lap against my inbox on a yearly basis. So, what is a Director of Evaluation for a foundation to do, or for that matter, a Director of a nonprofit, when reviewing a proposal from a potential contractor?

First, know what it is that you want out of an evaluation: determine what information you want to learn about the program(s)/project(s) and document it. It has been my experience that, when presented with a vacuum, evaluators will build what they can into the evaluation’s structure. While some serendipitous information of value can be discovered that way, it is far better to give contractors a sense of what you and your organization wish to learn. This information should be incorporated into the request for proposals (RFP). Second, the RFP should include a requirement that the contractor agree to and adhere to the American Evaluation Association’s Guiding Principles. Finally, request to see previous work from the contractor, to get a general sense of their philosophy and style of evaluation.

In reviewing these documents, think about your organization and the stakeholders of the evaluation. Do the stakeholders value one methodology for garnering information over another? Will the evaluation provide you with what you want to know? Really, the question is: are the contractor and their evaluation design a good fit for you? That fit, an agreement in philosophy, focus, intent, and concern, is critical. Without it, even the most rigorous evaluation design, one that develops all sorts of potentially useful information, will lie fallow for lack of investment by the stakeholders.

Incidentally, I struggle with selection of contractors for our evaluations, much as others do. I value the diversity that makes up the Association and the profession of evaluation, so I oppose stricter constraints on the use of the title of evaluator. However, the above is the “best methodology” I’ve developed to select contractors.

As always, I'm open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist
