Monday, August 3, 2015

Muddling Mathematical Magic

Hello readers!

One nice thing about blogs is that you get to be retrospective.  It allows me to think a bit about my practice and experiences - which, of course, I share with you.  In the vein of my recent postings, I’m going to get a bit personal with this one.  In this posting, we are going to talk about how mathematics - or, more importantly, the manner in which evaluative questions are asked and answered - can present very different “realities”.

One of the largest challenges I’ve faced as an evaluator is eliciting the right evaluative questions from my clients (internal and external).  Often, the process begins with the question, “What do we want to learn or know?”  But really, there is a deeper question: what constitutes meaningful information to the stakeholders?  Finally, there is the question, “How will the information be used?”

Reading my previous posts, you will note that I have an orientation toward using evaluation for learning rather than for promotion or marketing.  These orientations can coexist, but the information is often presented differently and, frankly, can also be limited by its intended use - most of us don’t want to share the negative sides of our organizations and work.  More importantly, the same data, presented differently, can produce these different realities.  Some time ago, I worked with an organization that was more marketing focused than learning focused.  I’m going to use one of its key measures as an example.

The organization was interested in increasing the number of people engaged in a behavior.  Now, you can look at this a number of ways.  The simplest mathematically, and certainly the most accurate, is to do a simple count.  However, those numbers can appear low.  A slightly more complex way to look at this is to speak of percent increase.  Both use the same data, but each question results in a different perspective.  If 5 individuals started engaging in the behavior, some stakeholders might be disappointed in the result, depending on the scope of the project.  If the target for the intervention was 200 people, suddenly the impact might not appear so large.  However, let’s say there were originally 5 people engaging in the behavior.  With 5 more now engaging, we can say that there was a 100% increase in engagement.  So we have this presented two ways:

  • 5 individuals out of 200 started engaging in the behavior after the intervention.
  • There was a 100% increase in the number of individuals engaging in the behavior after the intervention.

Which sounds better to you?  The basic evaluation question is the same and the data are the same, but the measures are different.  The difference is in the framing of the question and how it is reported.  I’ll make it worse…  the report could also state:

  • 2.5% of the target group started engaging in the behavior after the intervention.

Well, maybe that doesn’t sound as bad as the count of 5 after all.  In the third example, I left out the scope (the 200 targeted individuals), so the reader never learns that the count was only 5.
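
For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python using the same made-up numbers from the example above (5 new participants, a target group of 200, and 5 people already engaging at baseline); the variable names are mine, purely for illustration:

    # The same made-up numbers from the example above, reported three ways.
    new_participants = 5        # people who started the behavior after the intervention
    target_group = 200          # people the intervention aimed to reach
    baseline_participants = 5   # people already engaging before the intervention

    # Framing 1: the raw count against the scope
    print(f"{new_participants} out of {target_group} individuals started engaging.")

    # Framing 2: percent increase over the baseline (5 -> 10 is a 100% increase)
    percent_increase = 100 * new_participants / baseline_participants
    print(f"{percent_increase:.0f}% increase in individuals engaging.")

    # Framing 3: percent of the target group reached (5 / 200 = 2.5%)
    percent_of_target = 100 * new_participants / target_group
    print(f"{percent_of_target:.1f}% of the target group started engaging.")

All three statements are arithmetically correct; the only difference is which denominator - the baseline or the target group - gets put in front of the reader.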

I’m using this example to make a point, specifically to evaluators and consumers of evaluation.  It is critical to be clear about the evaluative question and what it really tells you.  I’ll complicate things more - what if I said that the count of 5 was statistically significant?  What would you say then?  Does your opinion change, or is it already tainted by what I shared above?  Does it call into question the idea of statistical significance?
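
To make that question concrete: a significance test only means something relative to some comparison.  Here is a purely illustrative sketch, assuming a hypothetical baseline rate of 0.5% (1 in 200) who would have started the behavior without the intervention - a number I’m inventing solely for illustration, not one from the example above:

    # Purely illustrative: is a count of 5 out of 200 "statistically significant"?
    # That depends entirely on the comparison.  The 0.5% baseline below is an
    # invented assumption, not a figure from the example.
    from scipy.stats import binomtest

    result = binomtest(k=5, n=200, p=0.005, alternative="greater")
    print(f"p-value: {result.pvalue:.4f}")  # a small p-value reads as "significant" against that baseline

Raise the assumed baseline and the very same count of 5 stops being significant - which is the point: the number alone does not carry the meaning; the comparison does.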

But let me offer another piece of data - what if we aren’t talking about engaging in a behavior like volunteering, but a successful intervention meant those individuals lived while the others died?  Does this change your view?  What if we are talking about a clinical trial for inoperable cancer, and 5 went into remission while the remainder did not?  Does the scale of impact shift your view?

My guess is that some of you said yes to some of the questions above.  As consumers of evaluative data, we bring to the table concepts around impact and scale.  A 100% increase sounds darn good no matter what - certainly better than “we doubled those engaged”.  Some might ask a further question and find out the impact was only 5 out of 200.  Depending on the relative strength of the impact (life and death probably being the most extreme example), you can view that result differently again.  Add in the cost of the intervention and the question of value becomes even more muddled.

I have a proposal…  As evaluators and consumers of evaluation, let’s ask for clarity.  When looking at impact, let’s treat expectations - and the significance of those expectations - as the “comparison group”.  In writing and reading those reports, let’s keep those expectations at the forefront of our minds and in the text of the report.  As learners, we would want to know this, and frankly, as a consumer of marketing materials, I would want these things clear as well.

As always, I’m open to your comments, suggestions, questions, and perhaps dialogue.  Please feel free to post in the comments section.

 

Best regards,

Charles Gasper

The Evaluation Evangelist