Saturday, November 14, 2015

Pondering this week's events

[Image: Earth]

Dear reader,

As you know, I’ve kept this blog going on an infrequent basis.  I had thought I would write bi-weekly on topics of interest, but in allowing things to develop on their own, I am finding that I write when inspiration strikes.  I’m embracing this position with the thought that I will only write when I think I have something useful, insightful, or thoughtful to share.  I hope you find my thoughts today fit at least one of those criteria.

For those of you who follow me on Twitter or are friends on Facebook, you are no doubt aware that I’ve been attending the 2015 American Evaluation Association conference.  You have also probably heard, seen, or experienced the events in Paris.  Today was the last day of our conference and I’m now sitting in an airport lounge, waiting to travel back home.  I bring you these thoughts as I watch a river of humanity flow by.  As an evaluator, it is my job to help individuals, organizations, and communities work to make the world a better place.  My efforts go toward helping these people articulate what they are trying to accomplish and how what they are doing should effect these changes, and toward aiding them in seeing whether all of this actually happens.

At our recent conference, I was exposed to others bent on similar missions - working with different people, engaging different methods - but all still driving to help.  I watch the river of people flow by and I wonder how many of them have similar thoughts and feelings, how many are personally and professionally dedicated to helping others.  Oh, and incidentally, I think you can be in just about any profession and have this orientation.

Today, I learned of the scale of horror that was perpetrated in Paris.  These individuals have lost sight of the notion of help and choose instead to act to push their own position.  It isn’t a question of values, but of power and the desire to exert it.  We see this not only this week in Paris, but in other parts of the world.  The dialogue around these events is only divisive, with polar viewpoints having the resources to broadcast their actions and voices around the world.  There is no solution in these actions, only a flexing of power and a perpetuation of the status quo.

Today, I saw the public’s response to the horror that was perpetrated in Paris.  I saw people come together for moments of silence, people changing their social media profiles, people expressing their outrage and grief - perhaps as I am doing now.  But beyond them, today I saw a different group of people, a group I identify with - the helpers.  Mind you, the people at this most recent conference aren’t the only members of this group, but they represent similar values and focus.

  • They are aware there are issues.
  • They are aware of who these issues affect.
  • They are aware of who can help address these issues.
  • They are concerned by the issues.
  • They are actively working with others to find the solutions.
  • They are actively working with others to effect these solutions.
  • They are actively working to improve upon those solutions.
  • They are actively working to ensure those solutions are sustained.

Many of these people, as I have, have expressed their grief at the most recent events in Paris as well as at other events around the world.  Many of these events are not sudden, galvanizing events, but rather have been smoldering and simmering for generations.  As such, many of these things are not visible to much of the public.

Today, it is so easy to express grief or solidarity.  Change a photo on Facebook, like something, tweet or blog.  It is much more difficult to actually decide that you are going to be a helper.  We all affect change in our communities simply by existing within them.  A helper consciously chooses to do something because they think it might make a difference.  I’m suggesting to you, dear reader, that we can all be helpers.  In my case, as an evaluator, I do much of what I bulleted above.  I can do more, and the events of this week (the conference and what has occurred in the world) have reminded me of this.

I would ask that you consider whether you are a helper too.  Do your thoughts about your local and global communities end with a change in your Facebook status or a retweet, or are you doing something more?  Are you part of the polarizing rhetoric, driving away any compromise or understanding to focus only on an ideal or vision, or are you willing to step up and listen to all sides of the story, roll up your sleeves, and try to help?

As always, I welcome your thoughts and feedback.  Please leave comments with your thoughts.

Warm regards,

The Evaluation Evangelist

Monday, August 3, 2015

Muddling Mathematical Magic

Hello readers!

One nice thing about blogs is that you get to be retrospective.  This allows me to think a bit about my practice and experiences - which, of course, I share with you.  In the vein of my recent postings, I’m going to get a bit personal with this one.  In this post, we are going to talk about how mathematics - or more importantly, the manner in which evaluative questions are asked and answered - can really present different “realities”.

One of the largest challenges I’ve faced as an evaluator is eliciting the right evaluative questions from my clients (internal and external).  Often, the process begins with the question, “What do we want to learn or know?”  But really, there is a deeper question: what constitutes meaningful information to the stakeholders?  Finally, there is the question, “How will the information be used?”

Reading my previous posts, you will note that I have an orientation toward using evaluation for learning versus promotion or marketing.  These orientations can coexist, but the information is often presented differently and frankly can also be limited by the intended use - most of us don’t want to share the negative sides of our organizations and work.  More importantly, the same data or information, presented differently, can result in these different realities.  Some time ago, I worked with an organization that was more marketing-focused than learning-focused.  I’m going to use one of its key measures as an example.

The organization was interested in increasing the number of people engaged in a behavior.  Now, you can look at this a number of ways.  The simplest mathematically, and certainly the most transparent, is to do a simple count.  However, those numbers can appear low.  A slightly more complex way to look at this is to speak of percent increase.  Both use the same data, but each framing results in a different perspective.  Suppose 5 individuals started engaging in the behavior after the intervention.  Depending on the scope of the project, some stakeholders might be disappointed in that result - if the target for the intervention was 200 people, the impact suddenly might not appear so large.  However, let’s say there were originally 5 people engaging in the behavior; with 5 more, we can say that there was a 100% increase in engagement.  So we have this presented two ways:

  • 5 individuals out of 200 started engaging in the behavior after the intervention.
  • There was a 100% increase in the number of individuals engaging in the behavior after the intervention.

Which sounds better to you?  The basic evaluation question is the same and it is the same data, but the measures are different.  The difference is in the framing of the question and how it is reported.  I’ll make it worse…  the report could also state:

  • 2.5% of the target group started engaging in the behavior after the intervention.

Well, maybe that doesn’t sound as bad as the count of 5 after all.  In the third example, I left out the scope (a target group of 200 individuals), so the reader never sees the actual numbers.
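To make the point concrete, here is a minimal sketch in Python, using the counts from the hypothetical example above, of how the same underlying data can be reported three different ways:

```python
# Hypothetical figures from the example above.
baseline = 5        # people already engaging in the behavior
new_engagers = 5    # people who started engaging after the intervention
target_group = 200  # people the intervention was intended to reach

# Framing 1: the raw count.
print(f"{new_engagers} individuals out of {target_group} started engaging.")

# Framing 2: percent increase over the baseline.
pct_increase = new_engagers / baseline * 100
print(f"There was a {pct_increase:.0f}% increase in the number engaging.")  # 100%

# Framing 3: share of the target group, with the raw numbers hidden.
pct_of_target = new_engagers / target_group * 100
print(f"{pct_of_target:.1f}% of the target group started engaging.")  # 2.5%
```

Same data, three very different-sounding statements.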

I’m using this example to make a point, specifically to evaluators and consumers of evaluation.  It is critical to be clear about the evaluative question and what it really tells you.  I’ll complicate things more - what if I said that the count of 5 was statistically significant?  What would you say then?  Does your opinion change, or is it already tainted by what I shared above?  Does it call into question the idea of statistical significance?

But let me offer another piece of data - what if we aren’t talking about a behavior like volunteering, but about an intervention where success meant those 5 individuals lived and the others died?  Does this change your view?  What if we are talking about a clinical trial for inoperable cancer, where 5 went into remission and the remainder did not?  Does the scale of impact shift your view?

My guess is that some of you said yes to some of the questions above.  As consumers of evaluative data, we bring to the table our own concepts of impact and scale.  A 100% increase sounds darn good no matter what - certainly better than “we doubled the number engaged.”  Some might ask a further question and find out the impact was only 5 out of 200.  Depending on the relative strength of the impact (life and death being probably the most extreme example), you can view that result differently again.  Add in the cost of the intervention and the question of value becomes even more muddled.

I have a proposal…  As evaluators and consumers of evaluation, let’s ask for clarity.  When looking at impact, let’s treat expectations, and the significance of those expectations, as the “comparison group.”  In reporting, and in reading those reports, let’s keep those expectations in the forefront of our minds and in the text of the report.  As learners, we would want to know this, and frankly, as a consumer of marketing materials, I would want these things clear as well.

As always, I’m open to your comments, suggestions, questions, and perhaps dialogue.  Please feel free to post in the comments section.

 

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, July 8, 2015

Have You Given Up?

Ok folks - I’m going to make a confession.  I gave up.  Yes, you heard it here first: I listened to the critics, tossed my hands up in the air, and gave up.  And then a few things happened in my life to change that - I got a dog and changed jobs, among other things.  The most recent thing to affect me was, of all things, a movie.  What many of you don’t know is that I’m not-so-secretly a fan of Disney, and when the movie Tomorrowland was announced - well, I had to see it.  It wasn’t the actors or the storyline that made me want to see it.  It was the name and the fact that it coincided with the name of my favorite place at Disneyland and Walt Disney World - Tomorrowland - a place where my dreams of the future were realized.  I was into space, rockets, and science as a kid, and this was a place that embodied it.  So the opportunity to glimpse how someone would imagine that place, inhabited, was the draw.  And so I went…

Well, before I went, I read the prequel to the movie - Before Tomorrowland - and I enjoyed it.  It was a simple read and the storyline was simple too, but it tugged a string inside of me.  I’m not going to spoil either the movie or the book for you, but suffice it to say, there are people in both stories who haven’t given up.  But it wasn’t just the book and movie that affected me, it was the movie reviews - both negative and positive.  While there were the standard comments about acting, cinematography, plot, and the like, what caught my attention were reactions to the “message” of the movie.  Some key words:

  • Preachy
  • Dull
  • Wishful thinking
  • Hope for humanity

Or my favorite quote from Rolling Stone - “No cynicism, no snark!  What!  In box-office terms, that usually translates into no chance. Yes Brad Bird’s Tomorrowland, a noble failure about trying to succeed, is written and directed with such open-hearted optimism that you cheer it on even as it stumbles.” - http://www.rollingstone.com/movies/reviews/tomorrowland-20150520

The story arc in the movie is that we (all of us) have pretty much given up.  We look at the world around us and have finally given in to entropy, assuming that there is nothing we can do.  Ouch!  But there were the reviews - some of them reflecting that cynicism, many snarky.  There were those who pulled their own beliefs about global warming into their reviews, claiming that the movie was a shill for those promoting it as an issue (note - there is evidence of flooding in the movie, and perhaps global warming was in the lines), but they missed the point of the movie - that we can do something about it, that we can do something about pretty much anything if we just work together to accomplish it.

To give you context, I’m going to tell you more about me.  I grew up in Los Angeles (a city in California, which is a state in the United States).  My dad was a rocket scientist - ok, really he was a physicist, but he worked on systems that flew aboard rockets (among other things).  My formative years were in the 1970s and 1980s.  Back then, we had landed on the moon, sent probes around other planets, and atomic energy still had some promise.  Perhaps it was not as grand as the idealized view of promise we now hold of the States in the 1950s, but people had not given up yet.  The thought was that science could still effect positive change.

Flash forward to today…  We have a movie that actually uses the loss of that vision as a plot hook.  It exists because it is something we can all relate to.  It is a plot that wouldn’t have worked back in 1950 - back then, folks were actually considering using nuclear bombs for construction and to deal with hurricanes, terraforming other planets, and other things that we now scoff at.  Hope and wishful thinking have been replaced by cynicism and snark.

And so I ask you - have you given up?  I did!  I was filled with these emotions.  I was tired of hearing how change could happen - for it couldn’t, could it?…

I won’t shill for Disney and suggest you go see the movie (I did like it though), but I would ask that you take a moment and think.  Have you slipped into the view of cynicism?  Do you find those around you who may still be fighting to improve things, preachy?  Why is that?

.

.

.

Ok - so how does this relate to evaluation?  What?  You didn’t think I wouldn’t talk about evaluation this time, did you?  I am the Evangelist after all!  ;-)

My mentor and advisor at Claremont Graduate University, Stewart Donaldson, has written on evaluation anxiety, and practicing evaluators often deal with the issue.  In essence, those who are potentially affected by the evaluation become anxious and fearful - they fear their program will be cut, their positions cut, their support cut.  I see much of this as tied to the cynicism I remarked on above.  Someone has given evaluators a bad name - look to my previous blog post addressing it if you would like to see where the fingers might point - but many look at evaluators as being there either to demonstrate that their product is good (marketing) or to do the exact opposite.  This focus on value has been present for some time.  However, evaluators aren’t interested in demonstrating or destroying; they are interested in helping - or should be.

And so, I specifically ask the evaluators reading this - have you succumbed to cynicism?  Are you really trying to help, or just providing value statements?  Or are you providing something more than that?

How about you consumers of evaluation?  What are you using your evaluation-based information for?

Perhaps I’ve turned too preachy as well?  ;-)

As always, I’m open to your comments, suggestions, questions, and perhaps dialogue.  Please feel free to post in the comments section.

 

Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, June 11, 2015

Collaboratives and Capacity

As an evaluator, I get to see a lot of things. One of the many things I’ve learned is that quality programming is tied to the capacity of the organization to deliver that programming. TCC Group, the company that now employs me, has an instrument called the CCAT (http://www.tccccat.com) that helps organizations assess themselves along four core capacities - adaptive, leadership, management, and technical. If you are interested, you are more than welcome to have a look.

However, I’m not writing just to speak to individual organizational capacities, but to issues of capacity within collaboratives. One of the “strengths” of a collaborative is that individual organizations can share resources - the old adage, “the rising tide floats all boats”. The statement sounds trite, and to some degree it is. Just because there is some “extra” capacity in the community or collaborative, it doesn’t mean that it is recognized as such, much less shared.

In the tenets of Collective Impact is the concept of a “Backbone Organization”. The job of this organization is to support the collaborative efforts of the organizations involved. Taken from a view of capacity, the Backbone Organization would be the one to identify the capacity needs and work within the collaborative (and outside, as necessary) to find the resources to fill those needs. In my last role, I developed assessments of capacity for the organizations building collaboratives across many communities in the United States. I wish we had been able to use the tool we have here at TCC, with some modifications, as it would have saved a great amount of time (but I digress).

While having a collective provides an opportunity for finding untapped resources, there are further complications. In my own work, I’ve considered the following, along with other issues:

  • What are the needed resources? Is it access to Human Resources support, staffing, equipment?
  • Where are the needed resources? Are they reasonably close that the organization needing them can use them? Are they at the level needed?
  • What efforts are necessary to ensure the resources are usable? Having access to internet bandwidth might be needed, but is there also a need for Information Technology support as well to ensure the bandwidth is usable?
  • What are the relationships of the organizations? Are they able to collaborate together beyond working on a similar goal?
  • Are there issues of over-provision of services in some parts of the community and service deserts in others?

Some tools I’ve found to be of help are GIS Mapping and Network Analysis.

GIS mapping allows for looking at the big picture. It doesn’t hurt to know where services are being delivered and where there is unnecessary overlap. Simply adjusting service areas can provide needed support to the community without any additional resources. It also allows the Backbone to map out capacity needs and resources, and to look at the feasibility of shared support or the needs of specific organizations. That van isn’t all that useful if it is owned and used by another organization that is clear across town.
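As a minimal illustration of the idea (not a recommendation of any particular GIS product), here is a sketch using the shapely library with made-up coordinates: it checks whether two organizations’ service areas overlap and how far a shared resource sits from the partner that needs it.

```python
from shapely.geometry import Point

# Hypothetical service centers (projected coordinates, e.g. meters) and service radii.
org_a_center, org_a_radius = Point(0, 0), 5000
org_b_center, org_b_radius = Point(6000, 0), 5000

org_a_area = org_a_center.buffer(org_a_radius)   # circular service area for Org A
org_b_area = org_b_center.buffer(org_b_radius)   # circular service area for Org B

# Where do the two service areas overlap (potentially duplicated services)?
overlap = org_a_area.intersection(org_b_area)
print(f"Overlapping service area: {overlap.area:.0f} square units")

# A shared resource (that van) parked at Org B - how far is it from Org A?
van_location = org_b_center
print(f"Distance from Org A to the van: {org_a_center.distance(van_location):.0f} units")
```

Real work would use actual service boundaries and travel times, but even this toy version shows the kind of question a Backbone can answer with a map.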

Network Analysis not only can help in mapping out and understanding relationships between organizations, but it can also be used to assess the sharing of resources. Over time, you can assess shifts in the flow of resources as well as keep track of needed capacities.
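Here, too, a small sketch may help. Assuming the hypothetical organizations and resource flows below (this is one way to represent the idea, not a prescribed toolset), the networkx library can record who shares what with whom and highlight which partners sit at the center of the sharing network:

```python
import networkx as nx

# Directed graph: an edge means the source organization shares a resource with the target.
sharing = nx.DiGraph()
sharing.add_edge("Org A", "Org B", resource="van")
sharing.add_edge("Org A", "Org C", resource="IT support")
sharing.add_edge("Backbone", "Org A", resource="evaluation staff")
sharing.add_edge("Backbone", "Org C", resource="meeting space")

# Which organizations are most central to the flow of shared resources?
centrality = nx.degree_centrality(sharing)
for org, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{org}: {score:.2f}")

# What is flowing where?
for giver, receiver, data in sharing.edges(data=True):
    print(f"{giver} -> {receiver}: {data['resource']}")
```

Rebuilding the graph at each reporting cycle lets you see whether sharing is growing, shrinking, or concentrating around a few partners.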

This brings me around to my push for Evaluation as an Intervention. To be effective, the backbone of any collaborative - or, for that matter, the leadership of any organization - really needs to understand its capacities and how they can be brought to bear. The results of a capacity evaluation done in a repeated fashion allow these leaders to make the necessary changes (and identify where support for those changes can be found within their system).

As always, I’m open to your comments, suggestions, and questions. Please feel free to post in the comments section.

 

Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, June 2, 2015

Evaluation as an Intervention

Today is the day.  Many of you know that I’ve been hinting at this for years, but today is the day that I make my own mark in the sand and tell you about my own “Theory of Evaluation”.  Please note that, much like other theories, mine is an evolution, not a revolution.  I don’t think I’m going to say anything here that is too controversial - unless you really subscribe to a certain viewpoint (which we will get to later) - but I thought it was time.

Evaluation as an Intervention

A few blog posts back, I walked you through my growth as an evaluator - starting with a class in organizational development and working to try to understand collaboratives.  There, I introduced you to the idea of evaluation as an intervention - speaking about the engagement of evaluators in program design and use of evaluation data to inform evolution of programs.  Today, I want to make that case stronger and speak more about why I think it is important.

Evaluation is dead?  I don’t think it is, just evolving.

In April of 2014, Andrew Means published a controversial post on Markets for Good - “The Death of Evaluation” (www.marketsforgood.org/the-death-of-evaluation/).  In it, he takes “traditional evaluation” to task and makes some rather bold statements:
  • Traditional program evaluation is reflective, not predictive
  • Program evaluation actually undermines efforts to improve
  • Program evaluation was built on the idea that data is scarce and expensive to collect
The posting created a bit of a buzz in the nonprofit world as well as the evaluation world.  I heard a great deal of grumbling from fellow evaluators - “how dare he declare this!?!?!?!”  On the nonprofit side, I heard some quiet “amen”s.  Neither surprised me.  Nor did it surprise me when I talked with a few funders and heard their thoughts - frankly, they were as mixed as the nonprofits’.  You see, there are still organizations out there that, unlike Mr. Means, want to prove (or at least, in a more generous viewpoint, find out whether) their program worked.  We’ll get to that in a bit, but more interesting is the trend of organizations that want to learn and move forward.

To fail forward, you need to know why you failed.

In the world of Collective Impact, StriveTogether has promoted “failing forward”.  The only way to fail forward is to have information that the failure occurred in the first place, as well as information about why the failure occurred.  Frankly, in the areas of manufacturing and healthcare, folks are already way ahead of the nonprofit world - concepts such as ISO standards to ensure consistency of work and Six Sigma for failure assessment have been around for quite a while.  We also hear about performance improvement and quality improvement from these areas, as well as others.  They have learned that a successful organization needs to be moving forward and improving to stay successful.  Incidentally, they all do a great job of marketing around these concepts as well.

Tradition isn’t all it’s cracked up to be.

So, why does “traditional program evaluation” exist?  Why do evaluations that assess over a longer time, with the program held constant (or perhaps affected by the environment), still get done?  I can think of a number of reasons, some perhaps good, some not, depending on your values.  Because frankly, it is the purchaser’s and user’s values that should be shaping what evaluation gets done.  Here are some of those reasons:
  • Intent to generalize the program to other environments - here people just want to know how the program works
  • Desire to market the program to others or to the current funder - the interest is in sales of the program, not changing it
  • Accountability focus - did the program do what it said it was going to do?
“Traditional evaluation”, as Mr. Means calls it, isn’t focused on learning - or at least not incremental learning.  It is focused on the “big picture” - did the program do what we thought it was going to do?  As such, it does have its place.  The question is whether that “big picture” is what is valued, or something else.

Programs evolve.  Why can’t evaluations evolve as well?

In my own thoughts on this, I came to an interesting observation.  Programs do not stay put for long.  They tend to evolve.  So the “big picture”, long-term evaluation project becomes more and more difficult to conduct, because maintaining a program as it was designed over years becomes more and more difficult as well.  While possible, it just isn’t likely that a program will stay exactly the same.  This is one of the reasons why my cousins who focus purely on social science research view evaluation as something “dirty”.  They see it as having less control of variables (control being the friend of research), and as such, the work isn’t as pure.  It is also why researchers try to boost the rigor of an evaluation project - to clean up the dirt as much as they can.

But social programs, and innovation specifically, are very dirty.  There are changes all the time, and Mr. Means, because of his focus, doesn’t value the purity - he values a program that produces results and craves better results (sorry for putting words in your mouth, Mr. Means - I hope I’m right).  And so, I would argue it is time to seriously consider evaluation as an intervention.

Evaluation can’t be predictive, but it can support efforts to improve, and by the way, it can use much of the data that is sitting around unrecognized and thus unused.  The trick is to embrace the dirt - things such as local context, environment, and change.  The cycle of collection and reporting also might shift a bit, depending not on the length of a grant or on program goals, but more on expectations for change in shorter-term outcomes.  These “performance measures” and the hypotheses associated with them (yes, we are still talking science here) are the information that constitutes the intervention.  They are often directly tied to service delivery, or at least more closely associated with it.  And they are meant to be shared with and used by decision makers as things go.  The real intervention occurs where the information is used and the stakeholders adjust the programming.  Another key hallmark of this work is that measures do not stay constant over time.
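To make that cycle concrete, here is a minimal sketch of one way to represent performance measures with their hypotheses so they can be reviewed, retired, or replaced as the program evolves. The measure names, targets, and structure are hypothetical illustrations, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str
    hypothesis: str          # the expected short-term change this measure tests
    target: float            # expectation agreed upon with stakeholders
    observed: float = 0.0
    active: bool = True      # measures can be retired as the program evolves

measures = [
    PerformanceMeasure("weekly attendance", "outreach calls increase attendance", target=50),
    PerformanceMeasure("referral follow-up rate", "warm hand-offs increase follow-up", target=0.6),
]

def review_cycle(measures, new_observations):
    """One reporting cycle: update observations and flag what stakeholders should discuss."""
    for m in measures:
        if m.name in new_observations:
            m.observed = new_observations[m.name]
        status = "met" if m.observed >= m.target else "not met"
        print(f"{m.name}: {m.observed} vs target {m.target} ({status})")
    # Decision point for stakeholders: adjust the program, or retire/replace a measure.

review_cycle(measures, {"weekly attendance": 42, "referral follow-up rate": 0.7})
```

The point of the sketch is the loop, not the code: short cycles of observation feeding stakeholder decisions, with no measure treated as permanent.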

Evaluate the system and program as it is, not as it “should be”.

Did I just surprise you?  I think I did!  I actually promoted the idea of dropping and/or changing measures over the duration of the evaluative work!  This isn’t how most evaluation work is done.  Michael Quinn Patton pokes a bit at this with Developmental Evaluation, but I’m saying that no measure is sacred - all can go.  Remember, the goal of this work isn’t to test a program’s impact, or to generalize, or for that matter to look for accountability - it is to improve the program.  As certain criteria are met and the program evolves, it is quite possible that some, if not all, measures will be replaced over the duration of the evaluation.  Measures that were meaningful at the beginning become less meaningful.  This addresses Mr. Means’ concerns around evidence of performance and success - if (and it is a very big if) the stakeholders are still truly interested in improvement.  It isn’t the measures or the measurer that determines whether a leader is content with their work - they only provide evidence for that leader to make an informed decision.  However, if the evaluation is designed to run on rails and get to a specific destination (and only that destination), pack your bags, for that is where it will wind up.

There are some rules to this.

And so, as we think of Evaluation as an Intervention, we have a few “rules” to consider (think of these as decision markers).  Evaluation as an Intervention:
  • Is intended for improvement of programs and systems
  • Is intended for learning
  • Isn’t intended for accountability assessment
  • Isn’t intended for those specifically interested in generalizability of their program
  • Isn’t intended for “big picture” assessments or marketing
  • Can be inexpensive (we will talk more about that in later blog posts)
Is this for everyone?  No - only use it if you need it.

As for Mr. Means, I appreciate his view.  It is shared by many, but not by all, and it is important to understand the needs.  It boils down to what information is important and to whom.  In the case of Evaluation as an Intervention, it is just that: recognizing and using evaluation to improve a program or system - not just document it.  And, as with all theories of evaluation, there are moments when one theory is more useful than another.  In my own practice, I engage multiple theories to support my work - I just felt that it was time to really define this one and give it a name.

I’ll be writing about this more in the near future as I work through this theory more.  As always, I’m open to your comments, suggestions and questions.  Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Friday, May 15, 2015

What can we learn from a salad?

What constitutes a salad?  What is necessary for something to be a salad?

This was a question my family was kicking around last week - trying to define a salad.  Yes, we have interesting conversations in the Evangelist household.

Merriam-Webster defines a salad as “a mixture of raw green vegetables (such as different types of lettuce) usually combined with other raw vegetables” or “a mixture of small pieces of raw or cooked food (such as pasta, meat, fruit, eggs, or vegetables) combined usually with a dressing and served cold."

But wait!  That could also describe a cold soup!  And, I have had a “hot salad” before as well.  The definition isn’t complete.

We finally came to the conclusion that a salad is like a chair - we are pretty sure we know what typically each is made of, but really it can be multiple things.

How does this relate to evaluation?...  I’m so glad you asked!

One of the key roles of an evaluator is describing, or better said, defining the program, project, initiative, or system that we are evaluating.  Some are easy.  We know that a pencil is some sort of writing implement with a soft, marking material that is usually graphite (previously lead) surrounded by another material, usually wood.  Well - perhaps even that varies.

What can we learn from these analogies?  That even something “easy” still has a great deal of variability when we wish to define it, especially when we generalize.

So, as an evaluator, I can describe what I see.  My salad consisted of a collection of green leaves (of multiple shades of green and shapes - some with stems), grated provel cheese (see here - http://en.wikipedia.org/wiki/Provel_cheese), olives, and a creamy Italian dressing.  It was delicious (ok, so I do some valuing as an evaluator as well).  I can define this salad at various levels of detail, but it does not generalize well to other salads.  As we ticked off a list of salads, we at first thought we had something generalizable - some sort of sauce.  It can be found in potato salad, hot spinach salad, even pasta salad.  But…  there are other things that come with sauce - my boiled ravioli with meat sauce, for instance (and yes, it too was delicious).

So, the problem was with generalizability.  I can do a great job evaluating the salad and for that matter the rest of the meal.  I can describe it in various levels of detail. I can provide a value judgement on the salad - in fact, I did so above.  But, I can’t generalize the salad to other restaurants.  

But that isn’t quite true, is it?  Granted, provel cheese is not normally found in cities other than St. Louis - but a different cheese can be substituted.  Further, go to a Greek restaurant and you might get the olives, but the sauce might be different.  And so it goes.  As we move away from an Italian restaurant to other types of restaurants, moving farther geographically, the contents of a salad change.

There are two things we can learn from this.

First - if you ask the chef at the restaurant where I dined (and I did), he most assuredly will tell you that he didn’t make the salad (design and implementation) to be generalizable.  He wants people to come to his restaurant for his salad - not to have people replicate it elsewhere and steal his business.

Second - that context matters.  The culture of the restaurant defines a significant amount of the salad.  I’m not getting a salad like what I described in a Chinese restaurant.  I might get closer in a Greek or French restaurant.

So - you know how everything ties back to evaluation for me.  Let’s explore those two learnings from a programmatic frame.

We have a program developed by an organization (the salad) that, frankly, the organization wants to make as unique as possible to differentiate itself from other organizations’ programs.  The program is designed for a specific culture and environment, and the designer isn’t interested in others applying it to their settings.  As an evaluator (funders, you should think about this too), we are going to learn a great deal about the program (salad) and we can explore it in depth - but generalizing it to other environments is going to be highly challenging and, depending on how different the context/culture you are exporting it to is, may require such modifications that the new program (salad) would be impossible to tie back to the original design.

Is the cause lost?  No, we just need to pay attention to the context and culture.  There are things in my Italian salad that would work in a Greek salad.  Green leafy vegetables are enjoyed by both palates.  Olives work as well.  Even the dressing is similar in its constituent components.  As the culture differs more significantly, the capacity for generalizability degrades.  The green leafy vegetables might be present in my Japanese salad, but olives are probably not going to be present and the dressing will be significantly different.  Even the container and the method by which I consume the salad may be different.

As evaluators, we are often asked to pull out these important lessons for our clients.  In the case of a program that is built and designed for a specific context and culture (frankly, I would say most are), we need to know and understand how the context and culture affected the program design and implementation.  What tools are present?  (Eating with a fork or chopsticks?  Is cutting the greens at the table ok, or is a knife even present?)  Miss these and you are going to advise an organization incorrectly.

So, we must pay attention to the context and culture (environment) of the program, project, or system - but if there is an interest in generalizability, we must also understand the environment to which we wish to port the program, to determine what modifications might be necessary.

I’ve been on the soapbox for a number of years that evaluators should be involved in program design - here is a great example of where they can be most helpful.  Often, they are engaged to do some sort of summative evaluation with the thought of taking those learnings and applying them more generally.  But there is a disconnect that often occurs.  The evaluation is completed on the original program.  The report is created with little thought beyond description and value assessments of the program.  And the organization funding the evaluation takes those findings and designs and implements something.  Often, the evaluator only knew it was a summative evaluation and did the job - they may even have known the purpose, but I’m going to ask a question here…  How many evaluators take it a step farther and ask, “Where are you planning on generalizing this program to?”  How many take it a step farther and incorporate an assessment of context and culture for the new environments?  Granted, those steps are often not funded or even considered (most don’t know where they plan to implement next).  But by keeping the evaluator involved in the program design for the generalized version, they can serve as the critical friend to talk about context, to bring in key people in the communities to share their thoughts - to test what is going to work in the new environments.

As a result, you will wind up with fewer discordant programs.  Ever see a pizza served in a Chinese restaurant?  These occasionally find their way onto kids’ menus.  I wonder how often they are bought and eaten?

As evaluators and consumers of evaluation, I’m curious to hear your own thoughts on this.  Do you think of these things when you are considering evaluations?  Have you run into programs, projects, systems that are so tailored to a certain environment that the generalizability would be extremely difficult?  Have you a definition for salad that addresses all the possible combinations - including pasta, meat, hot, and fruit salads?  Are we asking too much to attempt to define beyond what we see, to create artificial categories/structures to pin programs to?  If we reject those, how do we learn and share outside contexts?

As always, I’m open to your comments, suggestions, and questions.  Please feel free to post comments.


Best regards,

Charles Gasper

The Evaluation Evangelist

Friday, May 1, 2015

Evaluating Collaboratives - Exploring the Symphonic Metaphor

In my previous Blog, I mentioned that I would be visiting the symphonic metaphor again in the future.  Well, welcome to the future!...

At the time of my writing this, we still don’t have flying cars or jetpacks.  What we do have is a focus on collaboration across multiple sectors to effect positive change in communities.  There are many brands for this type of work, but in reality, it is just organizations of many types (nonprofit, business, civic, etc.) and individuals (concerned citizens, elected officials, etc.) coming together to try to solve an issue.  There are many steps to this work - and of course, evaluation can offer support at each step.

 

Identifying and Agreeing Upon an Issue

Getting there is no easy task.  There are many steps, and much can get in the way.  The first issue is identifying what is important.  I’ve a bit of experience with this, and you would be surprised at how difficult it is to come to an agreement about what constitutes a community issue.  While not considered a specific evaluative domain by many people (how often have I heard, “there’s nothing to measure yet, we don’t need you”), many of the skills evaluators engage can be of use.  Some of the methods I’ve used include:

  • Visioning exercises - These are great for getting people to present issues in a positive manner and often also can be used to establish the goal(s) of the collaborative.  Some prompts have included:
    • It’s 20 years from now and CNN, CNBC, Fox News (whomever) is talking about the major change that happened in your community, what was it?
    • You are being interviewed for the newspaper about what you accomplished, what was it?
    • You are met with a genie and given 3 wishes for your community, what are the things you wish for?
  • Service overlap mapping - This is great for starting the conversation around what people/organizations are bringing to the table.  This is like a heatmap versus a geographical map.  Here we often follow with additional questions:
    • Why are you providing the service?  (And you can’t just say there is a need.)
    • Where are there gaps on the map (service deserts)?  Why are they there?
    • What do the services have in common?

The neat thing about the two above methods is that you are attacking the problem from two different directions.  In the first case, you are just aiming for the result (impact, outcome).  In the second, you are looking at what people are doing and allowing them to weave it together into a meaningful result for the group.

Incidentally, you are also starting to set up your program theory and evaluation framework, as you are establishing the long-term outcomes they are collectively shooting for and then working backward to individual organizational outcomes and activities.
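As a sketch of what that backward mapping might look like in data form (the impact, outcomes, organizations, and activities here are hypothetical, just to show the shape), a simple nested structure is often enough to start:

```python
# A minimal, hypothetical program theory sketched backward from the shared impact.
program_theory = {
    "impact": "Reduced chronic disease in the community",
    "long_term_outcomes": [
        {
            "outcome": "Increased access to preventive care",
            "organizations": {
                "Community Clinic": ["mobile screening events", "extended clinic hours"],
                "Transit Agency": ["new bus route to the clinic"],
            },
        },
        {
            "outcome": "Healthier eating habits",
            "organizations": {
                "Food Bank": ["fresh produce distribution"],
                "Schools": ["nutrition curriculum"],
            },
        },
    ],
}

# Walk the theory backward: impact -> outcomes -> each partner's activities.
print(program_theory["impact"])
for lto in program_theory["long_term_outcomes"]:
    print(f"  Outcome: {lto['outcome']}")
    for org, activities in lto["organizations"].items():
        print(f"    {org}: {', '.join(activities)}")
```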

 

Identifying and Agreeing Upon What the Collaborative Is Doing (Or Will Do)

As an evaluator, you want to know what the activities are.  As a community activist, you want to know what your partners are going to do to support the cause.  This is another sticky issue, as many organizations and individuals might not recognize the contributions of others as relevant or appropriate.  This is where I like to help by using the results of the previous work.  We have our agreed-upon impact - we now need to agree upon what outcomes predict success.  We often rely on the organizations and individuals to provide us with their theories of impact (we can talk about this in another blog post in the future).  When drawn out and discussed, the map can look something like this:

[Image: a theory-of-impact map with arrows radiating outward in all directions, resembling Michael Moorcock’s symbol of chaos]

The fantasy author Michael Moorcock is the originator of the design idea - it is his symbol for chaos.  And chaos is what can occur if there isn’t “alignment” of the efforts - in essence, the community’s impact goal is never achieved because everyone is pulling hard, but in different directions.  The evaluator, through the clarity of the theory of impact, can help the organizations and individuals involved see what can happen and, with data, may be able to articulate it.  This service helps the group agree upon efforts.

 

Note of Caution

Please note, I’ve simplified this.  In reality, we are about two or so years into a collaborative’s work and, if we are lucky, we now have agreement on what we are trying to accomplish.

 

Changes

So we have agreement on what we are trying to accomplish and we are in theory pulling in the same direction.  As part of this process, you are going to be talking about definitions and clarifying indicators of activity and outcomes.  Well - now the evaluator moves to a more traditional role, tracking activities and outcomes.

Much like with any individual program, there are changes that occur.  Everyone is often focused on the impact on the community as measured by these changes.  However, there are other impacts that seem to accompany collaborations:

  • Changes in relationships and collaboration among the partner organizations and individuals
  • Individualized organizational change

When thinking about these collaborations, we really need to attend to all of these.  There are shifts that occur in capacity.  While I’m plugging the work of my organization here, TCC Group has a fantastic paper on what we call Capacity 3.0 - http://www.tccgrp.com/pubs/capacity_building_3.php.  It speaks to how we need to build capacity with the social sector ecosystem in mind, and how organizations need to understand, respond to, and structure themselves to adapt to changes in that ecosystem.  This informs some of my own thoughts, not just from one organization’s standpoint, but across a collaborative.  Partners need to see those changes and calibrate in order to collaborate effectively.  The evaluator can provide that data, if they are tracking all three change arenas (not to mention also looking at other environmental factors).

And So On To the Symphony

As a collaboration forms, we are able to see how the symphony is a good metaphor.  Prior to the curtain going up and the conductor taking the stage, we have sounds of music.  As each instrument tunes, their individual melodies of practice float through the air.  In combination, they are sometimes discordant and chaotic, but there are also moments where they seem to flow into a strange synergy.  These are the accidental combinations that can occur in the field.  But with the conductor (not the evaluator - we are just the critical friend and listener), we can help the orchestra practice.  Issues such as:

  • Choice of music
  • Selection of instruments for the piece
  • Sheet music to follow
  • Parts for the instruments to play
  • Timing and pace of the piece

can be addressed.  And like the orchestra, this work takes practice to improve.  The evaluator helps by providing feedback to the conductor and the other partners in the piece - that is, to the key council or leadership of the collaborative and to the partner organizations.

As always, I’m interested in your thoughts.  Please post comments, suggestions, or questions about what I’ve shared - I’m interested in learning from you as I share my own thoughts here.

Oh - one more thing…  While I did allude to my employer, the TCC Group, please note that these are uniquely my thoughts and do not necessarily represent the thoughts of the organization.

 

Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, April 14, 2015

My Return as a Blogger and My Life as an Evaluator

I was reading through my previous blog posts from the past few years and I came across a post I made on November 17, 2010 - the title?…  Wait for it…  My Return.  At that point, it had been over a year since my previous post.  Well, once again, I’m writing about my return, and once again, it has been over a year.

Changes

Today, I shared with many of you a change that occurred in my life.  I had been considering doing something like this for well over a year and finally decided it was time to switch things.  Earlier this month, I started working for the TCC Group.  TCC works with Foundations, Nonprofits, and Corporate Giving.  The institution has over 30 years of experience in the areas of strategy, grants management, capacity building and evaluation.

I was thinking it was a good idea to write about where I have been for the past year (really years) and why I haven’t written.  Then I realized – I have been working as an evaluator for over 20 years, have I ever really told you why I got into the business?

Have a seat, I’m going to share a bit here about how I got here.

Setting the Hook
In 1991 or so, I was an undergraduate student at Santa Clara University.  There, I was working with someone who would greatly influence my life, Dr. William McCormack.  One of the courses was on organizational development and we were required to work with an organization, writing a case study.  That experience drove me to think more about organizational effectiveness and impact – but it took a few more steps to put me on my path.

There was some additional evaluative work done, but in 1995 I found my love.  I was working on a project to assess the impact of a program on the quality of life of the individuals.  When the evaluation was completed and we submitted our findings, the state took our work and changed the program – improving the lives of the people served.  Thousands of people were impacted by my work!  I had found my calling.

Learning that Learning is Important
Along the way, I worked in quality management for a health system.  This experience molded my view on evaluation further, evolving it away from summative, value-focused work to learning and improvement.  Our work improved the experience and health outcomes for patients in hospitals and also resulted in organizational savings for the health system.  We weren’t conducting long-term studies, but instead focusing on short-term outcomes that predicted long-term success.  The mantra of the day for organizational change was, “what can we get done by next Tuesday?”

Movement to Large-Scale Impact and Collaboration
In 2007, I became a Director of Evaluation for a large health-focused foundation, the Missouri Foundation for Health (MFH).  It was time to think differently again - or, perhaps better said, to broaden my thinking.  Prior to coming to MFH, my focus was on program evaluation (even if the programs were larger).  Now it was time to see how multiple programs (yep, I was still focused on programs) could interact with one another to effect larger-scale impact.  During my stint at MFH, I also returned to graduate school, and while my evaluative practice was informed by program theory, it truly shifted to being theory-based (I’ll talk about that in a later blog).  I started to think about systems in a broader sense, not only seeing how an individual program interacted with a larger system, but how systems change can effect improvement across broad swaths of issues.

Evaluation as an Intervention
About this time, I started thinking about evaluation differently.  I recognized that programs and systems evolve over time and that evaluation can better support the effort if it doesn’t stand completely separate.  Provision of shorter-term information tied to program theory can better inform the evolution of programs and identify where the efforts are being effective.  There has been a shift in AEA in recent years, now recognizing that evaluators can be - and, some think, should be - involved in program design.  My epiphany in 2010 was tied to my recognition that my role as an evaluator in a foundation should support such work.

Collaborative Impact
Most recently, I’ve focused on how collaboratives are built and the results of the collective effort.  I’ve been assessing how these efforts combine with and attempt to change simple and complex systems.  You can think of these as multiple programs working in concert.  In reality, the concert often sounds like a bunch of instruments tuning up rather than following a score.  My work has focused on how to get the instruments to play together in the same hall, follow an agreed-upon score, and perhaps follow the baton of an agreed-upon conductor.  (We will revisit this analogy in a later blog post.)  There is a great deal to learn about collaboratives.  Certainly, this is something that has been done for years.  Folks have given it different brand names, but really it is just about learning how groups of people, organizations, civic leaders, and communities can come together to effect systems change.

Which Brings Us to Today
The process of moving to the TCC Group made me contemplate my practice as an evaluator.  In the interest of keeping this post relatively short, I have only shared some of the revelations I made.  My journey reflects two key things: 1) my own personal growth in the field, and 2) evolution in the field of social change.  To be clear, programs are still important.  Valuing the impact of those efforts is also very important.  However, sustainability of the programs and their supporting organizations, organizational and communal learning, and systems change have become more important as those attempting to effect larger-scale change turn away from focusing on just their own work and look at the environment around them.

And So It Goes
I hope you enjoyed reading about my evolution as an evaluator.  Much like the programs and systems we evaluate, my practice will continue to grow.  I would be very interested in learning your stories around how your engagement and understanding of evaluation has changed over time.
As always, I’m open to your comments, suggestions, questions and yes, your stories.  Please feel free to post comments.

Best regards,
Charles Gasper
The Evaluation Evangelist