Friday, May 15, 2015

What can we learn from a salad?

What constitutes a salad?  What is necessary for something to be a salad?

This was a question my family was kicking around last week - trying to define a salad.  Yes, we have interesting conversations in the Evangelist household.

Merriam-Webster defines a salad as “a mixture of raw green vegetables (such as different types of lettuce) usually combined with other raw vegetables” or “a mixture of small pieces of raw or cooked food (such as pasta, meat, fruit, eggs, or vegetables) combined usually with a dressing and served cold.”

But wait!  That could also describe a cold soup!  And, I have had a “hot salad” before as well.  The definition isn’t complete.

We finally came to the conclusion that a salad is like a chair - we are pretty sure we know what typically each is made of, but really it can be multiple things.

How does this relate to evaluation?...  I’m so glad you asked!

One of the key roles of an evaluator is describing, or better said, defining the program, project, initiative, or system that we are evaluating.  Some are easy.  We know that a pencil is some sort of writing implement with a soft, marking material that is usually graphite (historically called lead) surrounded by a casing, usually wood.  Well - perhaps even that varies.

What can we learn from these analogies?  That even something “easy” still has a great deal of variability when we wish to define it, especially when we generalize.

So, as an evaluator, I can describe what I see.  My salad consisted of a collection of green leaves (of multiple shades of green and shapes - some with stems), grated provel cheese (see here - http://en.wikipedia.org/wiki/Provel_cheese), olives, and a creamy Italian dressing.  It was delicious (ok, so I do some valuing as an evaluator as well).  I can define this salad in various levels of detail, but it does not generalize well to other salads.  As we ticked off a list of salads, we at first thought we had something generalizable - some sort of sauce.  It can be found in potato salad, hot spinach salad, even pasta salad.  But…  There are other things that come with sauce - my boiled ravioli with meat sauce for instance (and yes, it too was delicious).

So, the problem was with generalizability.  I can do a great job evaluating the salad and for that matter the rest of the meal.  I can describe it in various levels of detail. I can provide a value judgement on the salad - in fact, I did so above.  But, I can’t generalize the salad to other restaurants.  

But that isn’t quite true, is it?  Granted, provel cheese is not normally found outside of St. Louis - but a different cheese can be substituted.  Further, go to a Greek restaurant, and you might get the olives, but the sauce might be different.  And so it goes.  As we move away from an Italian restaurant to other types of restaurants, moving farther geographically, the contents of a salad change.

There are two things we can learn from this.

First - if you ask the chef at the restaurant where I dined (and I did), he most assuredly will tell you that he didn’t make the salad (design and implementation) to be generalizable.  He wants people to come to his restaurant for his salad - not to have people replicate it elsewhere and steal his business.

Second - that context matters.  The culture of the restaurant defines a significant amount of the salad.  I’m not getting a salad like what I described in a Chinese restaurant.  I might get closer in a Greek or French restaurant.

So - you know how everything ties back to evaluation for me.  Let’s explore those two learnings from a programmatic frame.

We have a program developed by an organization (the salad) that, frankly, the organization wants to make as unique as possible to differentiate itself from other organizations’ programs.  The program is designed for a specific culture and environment, and the designer isn’t interested in others applying it to their settings.  As an evaluator (funders, you should think about this too), we are going to learn a great deal about the program (salad) and can explore it in depth - but generalizing it to other environments is going to be highly challenging and, depending on how different the context/culture you are exporting it to is, may require such modifications that the new program (salad) would be impossible to tie back to the original design.

Is the cause lost?  No, we just need to pay attention to the context and culture.  There are things in my Italian salad that would work in a Greek salad.  Green leafy vegetables are enjoyed by both palates.  Olives work as well.  Even the dressing is similar in constituent components.  As the culture differs more significantly, the capacity for generalizability degrades.  The green leafy vegetables might be present in my Japanese salad, but olives are probably not going to be present and the dressing will be significantly different.  Even the container and method by which I consume the salad may be different.

As evaluators, we are often asked to pull out these important lessons for our clients.  In the case of a program that is built and designed for a specific context and culture (frankly, I would say most are) - we need to know and understand how the context and culture affected the program design and implementation.  What tools are present?  (Eating with a fork or chopsticks?  Is cutting the greens at the table ok, or is a knife even present?)  Miss these and you are going to advise an organization incorrectly.

So, we must pay attention to the context and culture (environment) of the program, project, or system - but if there is an interest in generalizability, we must also understand the environment to which we wish to port the program, to determine what modifications might be necessary.

I’ve been on the soapbox for a number of years that evaluators should be involved in program design - here is a great example of where they can be most helpful.  Often, they are engaged to do some sort of summative evaluation with the thought of taking those learnings and applying them more generally.  But, there is a disconnect that often occurs.  The evaluation is completed on the original program.  The report is created with little thought beyond description and value assessments of the program.  And the funding organization for the evaluation takes those findings and designs and implements something.  Often, the evaluator only knows it is a summative evaluation and does the job - they may even know the purpose, but I’m going to ask a question here…  How many evaluators take it the step farther to ask - “where are you planning on generalizing this program to?”  How many take it the step farther to incorporate an assessment of context and culture for the new environments?  Granted, those steps are often not funded or even considered (most don’t know where they plan to implement next).  But, by keeping the evaluator involved in the program design for the generalized version - they can serve as the critical friend to talk about context, to bring in key people in the communities to share their thoughts - to test what is going to work in the new environments.

As a result, you will wind up with less discordant programs.  Ever see a pizza served in a Chinese restaurant?  These occasionally find their way onto kids’ menus.  I wonder how often they are bought and eaten?

As evaluators and consumers of evaluation, I’m curious to hear your own thoughts on this.  Do you think of these things when you are considering evaluations?  Have you run into programs, projects, systems that are so tailored to a certain environment that the generalizability would be extremely difficult?  Have you a definition for salad that addresses all the possible combinations - including pasta, meat, hot, and fruit salads?  Are we asking too much to attempt to define beyond what we see, to create artificial categories/structures to pin programs to?  If we reject those, how do we learn and share outside contexts?

As always, I’m open to your comments, suggestions, and questions.  Please feel free to post comments.


Best regards,

Charles Gasper

The Evaluation Evangelist

Friday, May 1, 2015

Evaluating Collaboratives - Exploring the Symphonic Metaphor

In my previous blog post, I mentioned that I would be visiting the symphonic metaphor again in the future.  Well, welcome to the future!...

At the time of my writing this, we still don’t have flying cars or jetpacks.  What we do have is a focus on collaboration across multiple sectors to effect positive change in communities.  There are many brands for this type of work, but in reality, it is just organizations of many types (nonprofit, business, civic, etc.) and individuals (concerned citizens, elected officials, etc.) coming together to try to solve an issue.  This work involves many steps - and of course, evaluation can offer support to each step.

 

Identifying and Agreeing Upon an Issue

To get there is no easy task.  There are many steps and much can get in the way.  The first issue is identifying what is important.  I’ve a bit of experience with this and you would be surprised at how difficult it is to come to an agreement about what constitutes a community issue.  While not considered a specific evaluative domain by many people (how often have I heard, “there’s nothing to measure yet, we don’t need you”), many of the skills evaluators employ can be of use.  Some of the methods I’ve used include:

  • Visioning exercises - These are great for getting people to present issues in a positive manner and often also can be used to establish the goal(s) of the collaborative.  Some prompts have included:
    • It’s 20 years from now and CNN, CNBC, Fox News (whichever) is talking about the major change that happened in your community - what was it?
    • You are being interviewed for the newspaper about what you accomplished, what was it?
    • You meet a genie and are granted 3 wishes for your community - what are the things you wish for?
  • Service overlap mapping - This is great for starting the conversation around what people/organizations are bringing to the table.  Think of the result as a heatmap rather than a simple geographical map - the intensity shows where services pile up and where there are gaps (a minimal sketch of how the tallying might work follows this list).  Here we often follow with additional questions:
    • Why are you providing the service?  (And you can’t just say there is a need.)
    • Where are there gaps on the map (service deserts)?  Why are they there?
    • What do the services have in common?
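To make the heatmap idea concrete, here is a minimal sketch (in Python) of how the results of a service overlap mapping exercise might be tallied.  All of the organization, service, and neighborhood names are hypothetical placeholders, and in practice this exercise is usually done visually on a map rather than in code.

from collections import defaultdict

# Hypothetical intake data from the mapping exercise:
# one row per (organization, service, neighborhood) that a partner reports.
offerings = [
    ("Org A", "food pantry", "Northside"),
    ("Org B", "food pantry", "Northside"),
    ("Org C", "job training", "Northside"),
    ("Org A", "job training", "Riverview"),
]

# Count how many organizations provide each service in each neighborhood.
coverage = defaultdict(int)
for org, service, area in offerings:
    coverage[(area, service)] += 1

areas = sorted({area for _, _, area in offerings})
services = sorted({service for _, service, _ in offerings})

# Print a simple text heatmap: high counts suggest overlap,
# zeros suggest gaps (service deserts) worth asking about.
print("area".ljust(12) + "".join(s.ljust(14) for s in services))
for area in areas:
    row = area.ljust(12)
    for service in services:
        row += str(coverage.get((area, service), 0)).ljust(14)
    print(row)

A table like this is only a starting point - the conversation about why the hot spots and the gaps exist is where the real value lies.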

The neat thing about the two above methods is that you are attacking the problem from two different directions.  In the first case, you are just aiming for the result (impact, outcome).  In the second, you are looking at what people are doing and allowing them to weave it together into a meaningful result for the group.

Incidentally, you are also starting to set up your program theory and evaluation framework, as you are establishing the long-term outcomes they are collectively shooting for and then working backward to individual organizational outcomes and activities.
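To illustrate that backward mapping, here is a minimal sketch of how an emerging program theory might be captured as a simple data structure - the shared impact at the top, the outcomes believed to predict it below, and each partner’s activities underneath those.  The impact, outcomes, organizations, and activities shown are invented for illustration, not a real collaborative’s theory.

# A hypothetical, highly simplified program theory for a collaborative.
program_theory = {
    "impact": "Youth in the community graduate ready for work",
    "outcomes": [
        {
            "outcome": "More students complete high school",
            "activities": [
                ("Org A", "after-school tutoring"),
                ("Org B", "family case management"),
            ],
        },
        {
            "outcome": "More graduates hold paid internships",
            "activities": [
                ("Org C", "employer outreach and placement"),
            ],
        },
    ],
}

# Walk backward from the shared impact to each partner's contribution -
# the same path an evaluation framework follows when choosing indicators.
print("Shared impact:", program_theory["impact"])
for node in program_theory["outcomes"]:
    print("  Outcome:", node["outcome"])
    for org, activity in node["activities"]:
        print("    " + org + ": " + activity)

Even a rough structure like this makes it easier to see where partners’ theories overlap, conflict, or leave gaps.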

 

Identifying and Agreeing Upon What the Collaborative Is Doing (Or Will Do)

As an evaluator, you want to know what the activities are.  As a community activist, you want to know what your partners are going to do to support the cause.  This is another sticky issue, as many organizations/individuals might not recognize the contributions of others as relevant or appropriate.  This is where I like to help by using the results of the previous work.  We have our agreed upon impact - we now need to agree upon what outcomes predict success.  We often rely on the organizations and individuals to provide us with their theories of impact (we can talk about this in another blog post in the future).  When drawn out and discussed, the map can look something like this:

[Image: the partners’ theories of impact drawn as arrows radiating in many directions - a figure resembling Michael Moorcock’s symbol for chaos]
The fantasy author Michael Moorcock is the originator of the design idea - his symbol for chaos.  And it is chaos that can occur if there isn’t “alignment” of the efforts - in essence, the community’s impact goal is never achieved because everyone is pulling hard, but in different directions.  The evaluator, through the clarity of the theory of impact, can help the organizations and individuals involved see what can happen and, with data, may be able to articulate it.  This service helps the group agree upon efforts.

 

Note of Caution

Please note, I’ve simplified this.  In reality - we are about 2 or so years into a collaborative’s work and if we are lucky, we now have agreement on what we are trying to accomplish.

 

Changes

So we have agreement on what we are trying to accomplish and we are in theory pulling in the same direction.  As part of this process, you are going to be talking about definitions and clarifying indicators of activity and outcomes.  Well - now the evaluator moves to a more traditional role, tracking activities and outcomes.

Much like any individual program, there are changes that occur.  Attention is often focused on the impact on the community as measured by those changes.  However, there are other changes that seem to accompany collaborations:

  • Changes in relationship and collaboration among the partner organizations and individuals
  • Individualized organizational change

When thinking about these collaborations, we really need to attend to all of these.  There are shifts that occur in capacity.  While I’m plugging the work of my organization here - the TCC Group has a fantastic paper on what we call Capacity 3.0 - http://www.tccgrp.com/pubs/capacity_building_3.php.  It speaks to how we need to build capacity with the social sector ecosystem in mind, and how organizations need to understand, respond to, and structure themselves to adapt to changes in that ecosystem.  Well - this informs some of my own thoughts, not just from one organization’s standpoint, but across a collaborative.  Partners need to see those changes and calibrate to collaborate effectively.  The evaluator can provide that data, if they are tracking all three change arenas (not to mention also looking at the other environmental factors).

And So On To the Symphony

As a collaboration forms, we are able to see how the symphony is a good metaphor.  Prior to the curtain going up and the conductor taking the stage, we have sounds of music.  As each instrument tunes, its individual melodies of practice float through the air.  In combination, they are sometimes discordant and chaotic, but there are also moments where they seem to flow into a strange synergy.  These are those accidental combinations that can occur in the field.  But with the conductor (not the evaluator - we are just the critical friend/listeners), we can help the orchestra practice.  Issues such as:

  • Choice of music
  • Selection of instruments for the piece
  • Sheet music to follow
  • Parts for the instruments to play
  • Timing and pace of the piece

can be addressed.  And like the orchestra, this work takes practice to improve.  The evaluator helps by providing feedback to the conductor and the other partners in the piece - the key council or leadership of a collaborative and the partner organizations.

As always, I’m interested in your thoughts.  Please feel free to post comments, suggestions, or questions about what I’ve shared.  I’m interested in learning from you as I share my own thoughts here.

Oh - one more thing…  While I did allude to my employer, the TCC Group, please note that these are uniquely my thoughts and do not necessarily represent the thoughts of the organization.

 

Best regards,

Charles Gasper

The Evaluation Evangelist