Thursday, June 11, 2015

Collaboratives and Capacity

As an evaluator, I get to see a lot of things. One of the many things I’ve learned is that quality programming is tied to the capacity of the organization to deliver that programming. TCC Group, the company that now employs me, has an instrument called the CCAT (http://www.tccccat.com) that helps organizations assess themselves along four core capacities - adaptive, leadership, management, and technical. If you are interested, you are more than welcome to have a look.

However, I’m not writing just to speak to individual organizational capacities, but to issues of capacity within collaboratives. One of the “strengths” of a collaborative is that individual organizations can share resources - the old adage that “a rising tide lifts all boats”. The statement sounds trite, and to some degree it is. Just because there is some “extra” capacity in the community or collaborative doesn’t mean that it is recognized as such, much less shared.

Among the tenets of Collective Impact is the concept of a “Backbone Organization”. The job of this organization is to support the collaborative efforts of the organizations involved. From a capacity point of view, the Backbone Organization would be the one to identify capacity needs and work within the collaborative (and outside it as necessary) to find the resources to fill those needs. In my last role, I developed capacity assessments for organizations building collaboratives in communities across the United States. I wish we had used the tool we have here at TCC, with some modifications, as it would have saved a great amount of time (but I digress).

While having a collaborative provides opportunities for finding untapped resources, it also brings complications. In my own work, I’ve considered the following, along with other issues:

  • What are the needed resources? Is it access to Human Resources support, staffing, or equipment?
  • Where are the needed resources? Are they close enough that the organization needing them can actually use them? Are they at the level needed?
  • What efforts are necessary to ensure the resources are usable? Access to internet bandwidth might be needed, but is Information Technology support also needed to ensure the bandwidth is usable?
  • What are the relationships among the organizations? Are they able to collaborate beyond working on a similar goal?
  • Are there issues of over-provision of services in parts of the community and service deserts elsewhere?

Some tools I’ve found to be of help are GIS Mapping and Network Analysis.

GIS mapping allows for looking at the big picture. It doesn’t hurt to know where services are being delivered and where there is unnecessary overlap. Simply adjusting service areas can provide needed support to the community without any additional resources. It also allows the Backbone to map out capacity needs and resources and to look at the feasibility of shared support or needs for specific organizations. That van isn’t all that useful if it is owned and used by another organization that is clear across town.
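For what it’s worth, here is a minimal sketch of the kind of overlap check I have in mind, assuming service areas are already available as polygons; the file names and the “org” column are hypothetical, and it uses the open-source geopandas library:

```python
# Minimal sketch: find where organizations' service areas overlap and
# where the community is not served at all. File and column names are
# hypothetical; any polygon data source would work.
import geopandas as gpd

# One polygon per organization, with an "org" column naming it.
# A projected CRS keeps areas in meters (use an equal-area CRS for real work).
areas = gpd.read_file("service_areas.geojson").to_crs(epsg=3857)

# Pairwise intersections; the self-overlay suffixes columns with _1 and _2.
overlap = gpd.overlay(areas, areas, how="intersection")
overlap = overlap[overlap["org_1"] < overlap["org_2"]].copy()  # drop self-matches and duplicate pairs
overlap["overlap_km2"] = overlap.geometry.area / 1e6

print(overlap[["org_1", "org_2", "overlap_km2"]]
      .sort_values("overlap_km2", ascending=False))

# Service deserts: parts of the community boundary that no one covers.
community = gpd.read_file("community_boundary.geojson").to_crs(epsg=3857)
unserved = community.geometry.unary_union.difference(areas.geometry.unary_union)
print(f"Unserved area: {unserved.area / 1e6:.1f} km^2")
```

The same overlay logic works whether you are mapping service areas or the locations of shared equipment like that van.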

Network Analysis can not only help in mapping out and understanding relationships between organizations, but it can also be used to assess the sharing of resources. Over time, you can assess shifts in the flow of resources as well as keep track of needed capacities.
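As a rough illustration (the organizations and resources below are invented), a resource-sharing network can be sketched with the networkx library; repeating the same snapshot each year is what lets you see the flow of resources shift:

```python
# Sketch of a resource-sharing network. Edges point from the organization
# providing a resource to the one receiving it; the data here are made up.
import networkx as nx

shares = [
    ("Org A", "Org B", "IT support"),
    ("Org A", "Org C", "vehicle"),
    ("Org D", "Org B", "grant writing"),
    ("Org C", "Org D", "meeting space"),
]

G = nx.DiGraph()
for giver, receiver, resource in shares:
    G.add_edge(giver, receiver, resource=resource)

# Organizations the collaborative leans on (many outgoing edges)...
print("Providers:", sorted(G.out_degree(), key=lambda n: n[1], reverse=True))
# ...and those drawing heavily on shared capacity (many incoming edges).
print("Recipients:", sorted(G.in_degree(), key=lambda n: n[1], reverse=True))

# Brokers sitting between otherwise unconnected organizations - candidates
# for extra support from the Backbone.
print("Brokers:", nx.betweenness_centrality(G))
```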

This brings me around to my push for Evaluation as an Intervention. To be effective, the backbone of any collaborative, or for that matter the leadership of an organization, really needs to understand its capacities and how they can be brought to bear. The results of a capacity evaluation conducted on a repeated basis allow these leaders to make the necessary changes (and identify where the capacity for those changes can be found within their system).

As always, I’m open to your comments, suggestions, and questions. Please feel free to post in the comments section.

 

Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, June 2, 2015

Evaluation as an Intervention

Today is the day. Many of you know that I’ve been hinting at this for years, but today is the day that I make my own mark in the sand and tell you about my own “Theory of Evaluation”. Please note that, much like other theories, mine is an evolution, not a revolution. I don’t think I’m going to say anything here that is too controversial - unless you really subscribe to a certain viewpoint (which we will get to later) - but I thought it was time.

Evaluation as an Intervention

A few blog posts back, I walked you through my growth as an evaluator - starting with a class in organizational development and working to try to understand collaboratives.  There, I introduced you to the idea of evaluation as an intervention - speaking about the engagement of evaluators in program design and use of evaluation data to inform evolution of programs.  Today, I want to make that case stronger and speak more about why I think it is important.

Evaluation is dead?  I don’t think it is, just evolving.

In April of 2014, Andrew Means published a controversial post on Markets for Good - “The Death of Evaluation” (www.marketsforgood.org/the-death-of-evaluation/). In it, he takes “traditional evaluation” to task and makes some rather bold statements:
  • Traditional program evaluation is reflective, not predictive
  • Program evaluation actually undermines efforts to improve
  • Program evaluation was built on the idea that data is scarce and expensive to collect
The posting created a bit of a buzz in the nonprofit world as well as the evaluation world. I heard a great deal of grumbling from fellow evaluators - “how dare he declare this!?!?!?!” On the nonprofit side, I heard some quiet “amen”s. Neither surprised me. Nor did it surprise me when I talked with a few funders and heard their thoughts - frankly, they were as mixed as the nonprofits’. You see, there are still organizations out there that, unlike Mr. Means, want to prove (or, in a more generous viewpoint, find out whether) their program worked. We’ll get to that in a bit, but more interesting is the trend of organizations that want to learn and move forward.

To fail forward, you need to know why you failed.

In the world of Collective Impact, StriveTogether has promoted “failing forward”. The only way to fail forward is to have information that the failure occurred in the first place, as well as information as to why it occurred. Frankly, in the areas of manufacturing and healthcare, folks are already way ahead of the nonprofit world - concepts such as ISO standards that ensure consistency of work and Six Sigma for failure assessment have been around for quite a while. We also hear about performance improvement and quality improvement from these areas as well as others. They have learned that an organization needs to keep moving forward and improving to be successful. Incidentally, they all do a great job of marketing around these concepts as well.

Tradition isn’t all it’s cracked up to be.

So, why does “traditional program evaluation” exist? Why do evaluations that assess over a longer time, with the program held constant (or perhaps affected by the environment), still get done? I can think of a number of reasons, some perhaps good, some not, depending on your values. Because, frankly, it is the purchaser’s and user’s values that should be shaping what evaluation gets done. Here are some of those reasons:
  • Intent to generalize the program to other environments - here people just want to know how the program works
  • Desire to market the program to others or to the current funder - the interest is in sales of the program, not changing it
  • Accountability focus - did the program do what it said it was going to do?
“Traditional evaluation”, as Mr. Means calls it, isn’t focused on learning - or at least not on incremental learning. It is focused on the “big picture” - did the program do what we thought it was going to do? As such, it does have its place. The question is whether that “big picture” is what is valued, or something else.

Programs evolve.  Why can’t evaluations evolve as well?

In my own thoughts on this, I came to an interesting observation. Programs do not stay put for long. They tend to evolve. So the “big picture”, long-term evaluation project becomes more and more difficult to conduct, because maintaining a program as it was designed over years becomes more and more difficult as well. While possible, it just isn’t likely that a program will stay exactly the same. This is one of the reasons why my cousins who focus solely on social science research view evaluation as something “dirty”. They see it as having less control over variables (control being the friend of research), and as such the work isn’t as pure. It is also why researchers try to boost the rigor of an evaluation project - to clean up the dirt as much as they can.

But social programs, and innovation specifically, are very dirty. There are changes all the time, and Mr. Means, because of his focus, doesn’t value the purity - he values a program that produces results and craves better results (sorry for putting words in your mouth, Mr. Means - I hope I’m right). And so, I would argue it is time to seriously consider evaluation as an intervention.

Evaluation can’t be predictive, but it can support efforts to improve, and, by the way, it can use much of the data that is sitting around unrecognized and thus unused. The trick is to embrace the dirt - such things as local context, environment, and change. The cycle of collection and reporting also might shift a bit - depending not on the length of a grant or on program goals, but more on expectations for change in shorter-term outcomes. These “performance measures” and the hypotheses associated with them (yes, we are still talking science here) are the information that constitutes the intervention. They are often directly tied to service delivery, or at least more closely associated with it. And they are meant to be shared with and used by decision makers as things go. The real intervention occurs when the information is used and the stakeholders adjust the programming. Another key hallmark of this work is that measures also do not stay constant over time.
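To make that concrete, here is one purely hypothetical way such measures might be tracked - a sketch of the idea, not a tool I’m prescribing - with a short collection cycle, an explicit hypothesis, and the ability to retire a measure once it stops being meaningful:

```python
# Hypothetical sketch: performance measures with explicit hypotheses and
# short reporting cycles, which can be retired and replaced as the program
# evolves. Names and numbers are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    hypothesis: str              # what we expect to see if the change is working
    cycle_weeks: int             # how often it is collected and reported
    active: bool = True
    history: list = field(default_factory=list)  # one observation per cycle

def report(measures):
    """Snapshot for decision makers: only the measures currently in use."""
    return {m.name: (m.history[-1] if m.history else None)
            for m in measures if m.active}

attendance = Measure("session attendance", "outreach calls raise attendance", cycle_weeks=4)
referrals = Measure("partner referrals", "new MOUs increase referrals", cycle_weeks=12)
attendance.history.append(0.62)

# The program evolves: attendance has stabilized, so that measure is retired
# rather than carried to the end of the evaluation.
attendance.active = False
print(report([attendance, referrals]))   # {'partner referrals': None}
```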

Evaluate the system and program as it is, not as it “should be”.

Did I just surprise you? I think I did! I actually promoted the idea of dropping and/or changing measures over the duration of the evaluative work! This isn’t how most evaluation work is done. Michael Quinn Patton pokes a bit at this with Developmental Evaluation, but I’m saying that no measure is sacred - all can go. Remember, the goal of this work isn’t to test a program’s impact, or to generalize, or for that matter to look for accountability - it is to improve the program. As certain criteria are met and the program evolves, it is quite possible that some, if not all, measures will be replaced over the duration of the evaluation. Measures that were meaningful at the beginning become less meaningful. This addresses Mr. Means’ concerns around evidence of performance and success - if (and it is a very big if) the stakeholders are still truly interested in improvement. It isn’t the measures or the measurer that determines whether a leader is content with their work - they only provide evidence for that leader to make an informed decision. However, if the evaluation is designed to run on rails and get to a specific destination (and only that destination), pack your bags, for that is where it will wind up.

There are some rules to this.

And so, as we think of Evaluation as an Intervention, we have a few “rules” to consider (think of these as decision markers). Evaluation as an Intervention:
  • Is intended for improvement of programs and systems
  • Is intended for learning
  • Isn’t intended for accountability assessment
  • Isn’t intended for those specifically interested in generalizability of their program
  • Isn’t intended for “big picture” assessments or marketing
  • Can be inexpensive (we will talk more about that in later blog posts)
Is this for everyone?  No, only if you need it.

As for Mr. Means, I appreciate his view. It is shared by many, but not by all, and it is important to understand the needs. It boils down to what information is important and to whom. In the case of Evaluation as an Intervention, it is just that: recognizing and using evaluation to improve a program or system - not just document it. And, as with all theories of evaluation, there are moments when one theory is more useful than another. In my own practice, I engage multiple theories to support my work - I just felt that it was time to really define this one and give it a name.

I’ll be writing about this more in the near future as I work through this theory more.  As always, I’m open to your comments, suggestions and questions.  Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist