Friday, May 15, 2015

What can we learn from a salad?

What constitutes a salad?  What is necessary for something to be a salad?

This was a question my family was kicking around last week - trying to define a salad.  Yes, we have interesting conversations in the Evangelist household.

Merriam-Webster defines a salad as “a mixture of raw green vegetables (such as different types of lettuce) usually combined with other raw vegetables” or “a mixture of small pieces of raw or cooked food (such as pasta, meat, fruit, eggs, or vegetables) combined usually with a dressing and served cold.”

But wait!  That could also describe a cold soup!  And, I have had a “hot salad” before as well.  The definition isn’t complete.

We finally came to the conclusion that a salad is like a chair - we are pretty sure we know what each is typically made of, but in reality either can be made of many different things.

How does this relate to evaluation?...  I’m so glad you asked!

One of the key roles of an evaluator is describing, or better said, defining the program, project, initiative, or system that we are evaluating.  Some are easy.  We know that a pencil is some sort of writing implement with a soft marking material, usually graphite (historically called lead), surrounded by another material, usually wood.  Well - perhaps even that varies.

What can we learn from these analogies?  That even something “easy” still has a great deal of variability when we wish to define it, especially when we generalize.

So, as an evaluator, I can describe what I see.  My salad consisted of a collection of green leaves (of multiple shades of green and shapes - some with stems), grated provel cheese (see here - http://en.wikipedia.org/wiki/Provel_cheese), olives, and a creamy Italian dressing.  It was delicious (ok, so I do some valuing as an evaluator as well).  I can define this salad in various levels of detail, but it does not generalize well to other salads.  As we ticked off a list of salads, we at first thought we had something generalizable - some sort of sauce.  It can be found in potato salad, hot spinach salad, even pasta salad.  But… there are other things that come with sauce - my boiled ravioli with meat sauce, for instance (and yes, it too was delicious).

So, the problem was with generalizability.  I can do a great job evaluating the salad and for that matter the rest of the meal.  I can describe it in various levels of detail. I can provide a value judgement on the salad - in fact, I did so above.  But, I can’t generalize the salad to other restaurants.  

But that isn’t quite true, is it?  Granted, provel cheese is not normally found outside of St. Louis - but a different cheese can be substituted.  Further, go to a Greek restaurant, and you might get the olives, but the sauce might be different.  And so it goes.  As we move from an Italian restaurant to other types of restaurants, and farther away geographically, the contents of a salad change.

There are two things we can learn from this.

First - if you ask the chef at the restaurant where I dined (and I did), he most assuredly will tell you that he didn’t make the salad (design and implementation) to be generalizable.  He wants people to come to his restaurant for his salad - not to have people replicate it elsewhere and steal his business.

Second - that context matters.  The culture of the restaurant defines a significant amount of the salad.  I’m not getting a salad like what I described in a Chinese restaurant.  I might get closer in a Greek or French restaurant.

So - you know how everything ties back to evaluation for me.  Let’s explore those two learnings from a programmatic frame.

We have a program developed by an organization (the salad) that, frankly, the organization wants to make as unique as possible to differentiate themselves from other organizations’ programs.  The program is designed for a specific culture and environment, and the designer isn’t interested in others applying it to their settings.  As an evaluator (funders, you should think about this too), we are going to learn a great deal about the program (salad) and can explore it in depth - but generalizing it to other environments is going to be highly challenging and, depending on how different the context/culture you are exporting it to, may require such modifications that the new program (salad) would be impossible to tie back to the original design.

Is the cause lost?  No, we just need to pay attention to the context and culture.  There are things in my Italian salad that would work in a Greek salad.  Green leafy vegetables are enjoyed by both palates.  Olives work as well.  Even the dressing is similar in constituent components.  As the culture differs more significantly, the capacity for generalizability degrades.  The green leafy vegetables might be present in my Japanese salad, but olives are probably not going to be present and the dressing will be significantly different.  Even the container and method by which I consume the salad may be different.

As evaluators, we are often asked to pull out these important lessons for our clients.  In the case of a program that is built and designed for a specific context and culture (frankly, I would say most are) - we need to know and understand how the context and culture affected the program design and implementation.  What tools are present?  (Eating with a fork or chopsticks?  Is cutting the greens at the table ok, or is a knife even present?)  Miss these and you are going to advise an organization incorrectly.

So, we must pay attention to the context and culture (environment) of the program, project, system - but we must also understand, if there is an interest in generalizability, the environment to which we wish to port the program, in order to determine what modifications might be necessary.

I’ve been on the soapbox for a number of years that evaluators should be involved in program design - here is a great example of where they can be most helpful.  Often, they are engaged to do some sort of summative evaluation with the thought of taking those learnings and applying them more generally.  But there is a disconnect that often occurs.  The evaluation is completed on the original program.  The report is created with little thought beyond description and value assessments of the program.  And the funding organization for the evaluation takes those findings and designs and implements something.  Often, the evaluator only knew it was a summative evaluation and did the job - they may even have known the purpose, but I’m going to ask a question here…  How many evaluators take it a step farther and ask, “Where are you planning on generalizing this program to?”  How many take it a step farther still and incorporate an assessment of context and culture for the new environments?  Granted, those steps are often not funded or even considered (most don’t know where they plan to implement next).  But by keeping the evaluator involved in the program design for the generalized version, they can serve as the critical friend to talk about context, to bring in key people in the communities to share their thoughts - to test what is going to work in the new environments.

As a result, you will wind up with programs that are less discordant with their contexts.  Ever see a pizza served in a Chinese restaurant?  These occasionally find their way onto kids’ menus.  I wonder how often they are bought and eaten?

As evaluators and consumers of evaluation, I’m curious to hear your own thoughts on this.  Do you think of these things when you are considering evaluations?  Have you run into programs, projects, systems that are so tailored to a certain environment that the generalizability would be extremely difficult?  Have you a definition for salad that addresses all the possible combinations - including pasta, meat, hot, and fruit salads?  Are we asking too much to attempt to define beyond what we see, to create artificial categories/structures to pin programs to?  If we reject those, how do we learn and share outside contexts?

As always, I’m open to your comments, suggestions, and questions.  Please feel free to post comments.


Best regards,

Charles Gasper

The Evaluation Evangelist
