Friday, February 19, 2016

Elon Musk, Vision and Your Organization

The life of a consultant means you get to travel places and see things you might otherwise not see.  This morning I was in Miami.  As I looked out my hotel window, I saw a Metromover go by and, being in Florida, had thoughts of Walt Disney and his dream for transportation in his original vision of EPCOT.  This got me thinking about Elon Musk and his various ventures and promotions - SpaceX, Hyperloop, and Tesla Motors.  Should his vision come to fruition, a more accessible version of Disney's transportation system would be available to many.  Imagine, if you will, stepping into your electric, autopiloted car and riding to the local Hyperloop station.  Perhaps boarding with your car, you are whisked across a significant portion of the continent to catch a sub-orbital flight across the globe or to access the space stations some of us dreamt about as kids.

I'm sure I'm ascribing some of this wisdom and planning to Mr. Musk, but connecting the dots (something evaluators love to do), a vision emerges.  The goal?  Transforming transportation.  I will admit that I am a fan of people who leverage their talents, ideas, and funds to effect positive change in the world.  I think that is why I enjoy my career as much as I do.  But often, I encounter organizations and leaders that have lost their vision, or perhaps better said, have been so wrapped up in the day-to-day management of their organization, programs, and products that they lose their connection with their vision.  Worse, I've encountered organizations that do not have a vision at all.  The best indicator of an organization that has lost or never had a vision is that when you look at its programs and products, connecting them under a single concept, or connecting them at all, is nigh impossible.

This large disconnect can result in inefficiency or ineffectiveness.  I say "can" because many organizations without vision also lack the measurable goals needed to assess efficiency and effectiveness.  With no connection, at best you can only look at each program or project on its own; without the interstitial tissue formed by the vision and organizational goals, it is difficult to pull them together and understand the impact of the organization.  Often, the disconnect with vision is the result of organizational or program creep that grew out of seeing a need.

As an evaluator, I'm often asked to measure impact, and I find that we spend time building these linkages.  In some cases, an implicit vision emerges.  However, there are often orphan programs or products that do not fit the core of the organization.  These orphans tend individually to be anemic in results and at best contribute in a small way to the larger effect of the organization.  In the business world, Apple's Pippin is a good example of an orphan.  On the surface, it seems to be a reasonable product tied to the vision of the organization.  However, at the time, Apple's vision was to be a computer company, not a digital platform and content conduit.  One could even argue that their PDA product, the Newton, was another orphan.  But when Apple's vision and associated organizational goals changed, similar products (the iPod, the iPad, and the iPhone) all connected well.

As an organizational leader, do you know your organization’s vision?  Is it your vision?  Do your programs and products tie to this vision?  If you are considering organizational impact, they need to tie.

As always, I welcome your thoughts and feedback.  Please leave comments with your thoughts.

Warm regards,

The Evaluation Evangelist

Tuesday, January 5, 2016

The Last Salute

As some of you may know, my dad passed away in November.  He did some great things throughout his life, and while I would love to spend some serious time here eulogizing him, it really wasn't his style (and this is more of an evaluation blog than anything else).  He tended to work in the background in more of a support role, embodying the servant leader rather than leading the charge.  Some might thus find it strange that my dad retired as a Commander in the United States Naval Reserve back in the 1980s.

I have vague memories of his service: his being away from our family for extended periods of time, his serving as the commanding officer of at least one reserve unit.  But I also have strong memories of a man who took interest in the things and people entrusted to his care.  He worked hard to make things better for those people and the things he worked on, and while he would scratch his head in wonder, it was his example that drove me to evaluation and that to this day affects how I approach it.

There is more I can say on my dad’s influence on me, both as a person and as a professional evaluator, but that isn’t the purpose of this post.

Many American veterans receive a special ceremony at their grave site.  My father was no different.  During this ceremony, members of the United States Navy rendered honors to my dad, first saluting him as he arrived at the grave site, playing taps, and toward the end of our experience, carefully removing the flag that draped over his casket and folding it for presentation to me.  What happened next moved me and made me think in retrospect about being an evaluator.

I held my dad's flag in my hands, having had it presented to me by a Chief Petty Officer (CPO).  He then stood at attention and slowly, deliberately, rendered my dad his last salute.  I had seen men and women salute my dad throughout his career - often saluting the uniform, but not the man.  This one was qualitatively different.  There was meaning and emotion behind that salute.  There was power behind that salute.  It was goodbye to the man, not simply the following of a ritual from page 10 of a manual.

As evaluators, we are asked to do many things.  It is easy to just collect data, analyze it, and report it without putting something of ourselves into it.  The rituals of evaluation are nearly always the same; what can change is how we engage in those rituals.  As an evaluator, I'm proud to say that I don't just salute the uniform.  Rather, I recognize that I am part of the system and that my work will have an impact and has meaning for the stakeholders.  As evaluators, we need to recognize that we serve a purpose that positions us as different from researchers.  We need to recognize that our work can, and often does, result in change.  Our mere presence, much like the men who stood at attention at my dad's grave, affects the programs, organizations, and systems we support as evaluators.

By accepting the fact that our work has an impact, doesn't it make sense that we embrace the notion?  The professor who taught my first evaluation course so many years ago said, "evaluation isn't like research."  At the time, he was referencing issues of control groups and the like, but I think of it differently.  Evaluation isn't like research because evaluation has evolved into something different.  Research is passive - focusing on learning a specific nugget of information without changing a thing.  Evaluation is active - looking to provide information that improves and changes what it touches.  As evaluators, we need to embrace that role, much as the CPO did for my dad.

Goodbye, Dad.  Thank you for all you taught me.  And thank you to the CPO who never met my dad in life: you reasserted for me what it means to provide a true and meaningful service.  You reminded me that as a person, I can affect others, and as an evaluator, I can do so much more.

As always, I welcome your thoughts and feedback.  Please leave comments with your thoughts.

Warm regards,

The Evaluation Evangelist

Saturday, November 14, 2015

Pondering this week's events

[Image: Earth]

Dear reader,

As you know, I've kept this blog going on an infrequent basis.  I had thought I would write bi-weekly on topics of interest, but, allowing the system to develop on its own, I am finding that I write when inspiration strikes.  I'm embracing this position with the thought that I will only write when I think I have something useful, insightful, or thoughtful to say.  I hope you find my thoughts today fit at least one of those criteria.

For those of you who follow me on Twitter or are friends on Facebook, you are no doubt aware that I've been attending the 2015 American Evaluation Association conference.  You have probably also heard, seen, or experienced the events in Paris.  Today was the last day of our conference and I'm now sitting in an airport lounge, waiting to travel back home.  I bring you these thoughts as I see a river of humanity flow by.  As an evaluator, it is my job to help individuals, organizations, and communities work to make the world a better place.  My efforts go to helping these people articulate what they are trying to accomplish and how what they are doing should effect these changes, and to aiding them in seeing whether all of this actually happens.

In our recent conference, I was exposed to others bent on similar missions - working with different people, engaging in different methods - but all still driving to help.  I watch the river of people flow by and I wonder how many of them have similar thoughts and feelings, how many are personally and professionally dedicated to helping others.  Oh, and incidentally, I think you can be in just about any profession and have this orientation.

Today, I learned of the scale of horror that was perpetrated in Paris.  These individuals lost sight of the notion of help and chose instead to act to push their own position and opinion.  It isn't a question of values, but of power and the desire to exert it.  We see this not only this week in Paris, but in other parts of the world.  The dialogue around these things is only divisive, with polar viewpoints having the resources to broadcast their actions and voices around the world.  There is no solution in these actions, only a flexing of power and a perpetuation of the status quo.

Today, I saw the public's response to the horror that was perpetrated in Paris.  I saw people come together for moments of silence.  People changing their social media profiles.  People expressing their outrage and grief - perhaps as I am doing now.  But beyond them, today I saw a different group of people.  A group of people I identify with - the helpers.  Mind you, the people at this most recent conference aren't the only members of this group, but they represent similar values and focus.

  • They are aware there are issues.
  • They are aware of who these issues affect.
  • They are aware of who can help address these issues.
  • They are concerned by the issues.
  • They are actively working with others to find the solutions.
  • They are actively working with others to effect these solutions.
  • They are actively working to improve upon those solutions.
  • They are actively working to ensure those solutions are sustained.

Many of these people, much as I, have expressed their grief at the most recent events in Paris as well as other events around the world.  Many of these events are not sudden, galvanizing events, but rather have been smoldering and simmering for generations.  As such, many of these things are not visible to much of the public.

Today, it is so easy to express grief or solidarity.  Change a photo on Facebook, like something, tweet or blog.  It is much more difficult to actually decide that you are going to be a helper.  We all affect our communities simply by existing within them.  A helper consciously chooses to do something because they think it might make a difference.  I'm suggesting to you, dear reader, that we can all be helpers.  In my case, as an evaluator, I do much of what I bulleted above.  I can do more, and the events of this week (the conference and what has occurred in the world) have reminded me of this.

I would ask that you consider whether you are a helper too.  Do your thoughts about your local and global communities end with a change in your Facebook status or a retweet, or are you doing something more?  Are you part of the polarizing rhetoric, driving away any compromise or understanding to focus only on an ideal or vision, or are you willing to step up and listen to all sides of the story, roll up your sleeves, and try to help?

As always, I welcome your thoughts and feedback.  Please leave comments with your thoughts.

Warm regards,

The Evaluation Evangelist

Monday, August 3, 2015

Muddling Mathematical Magic

Hello readers!

One nice thing about blogs is that you get to be retrospective.  This allows me to think a bit about my practice and experiences - which of course I share with you.  In the vein of my recent postings, I'm going to get a bit personal with this one.  In this posting, we are going to talk about how mathematics, or more importantly, the manner in which evaluative questions are asked and answered, can present very different "realities".

One of the largest challenges I've faced as an evaluator is eliciting the right evaluative questions from my clients (internal and external).  Often, the process begins with the question, "What do we want to learn or know?"  But really, there is a deeper question: what constitutes meaningful information to the stakeholders?  Finally, there is the question, "How will the information be used?"

Reading my previous posts, you will note that I have an orientation toward using evaluation for learning rather than promotion or marketing.  These orientations can coexist, but the information is often presented differently and, frankly, can also be limited by a focus on use - most of us don't want to share the negative sides of our organizations and work.  More importantly, the same data, presented differently, can produce these different realities.  Some time ago, I worked with an organization that was more marketing focused than learning focused.  I'm going to use one of its key measures as an example.

The organization was interested in increasing the number of people engaged in a behavior.  Now, you can look at this a number of ways; the simplest mathematically, and certainly the most accurate, is to do a simple count.  However, those numbers can appear low.  A slightly more complex way to look at this is to speak of percent increase.  Both use the same data, but each framing results in a different perspective.  If 5 individuals started engaging in the behavior, then depending on the scope of the project, some stakeholders might be disappointed in the result.  If the target for the intervention was 200 people, suddenly the impact might not appear so large.  However, suppose there were originally 5 people engaging in the behavior; we can then say that there was a 100% increase in engagement.  So we have this presented two ways:

  • 5 individuals out of 200 started engaging in the behavior after the intervention.
  • There was a 100% increase in the number of individuals engaging in the behavior after the intervention.

Which sounds better to you?  The basic evaluation question is the same and it is the same data, but the measures are different.  The difference is in the framing of the question and how it is reported.  I'll make it worse…  the report could also state:

  • 2.5% of the target group started engaging in the behavior after the intervention.

Well, maybe that doesn’t sound as bad as the count of 5 after all.  In the third example, I left out the scope (200 targeted individuals).
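To make the arithmetic concrete, here is a minimal sketch in Python using the made-up numbers from the example above (a baseline of 5 people already engaging, 5 new engagers, and a target population of 200).  Same data, three very different-sounding statements:

```python
# Toy numbers matching the example above; nothing here is real program data.
baseline_engaged = 5      # people already engaging before the intervention
new_engaged = 5           # people who started engaging after the intervention
target_population = 200   # people the intervention was intended to reach

count = new_engaged                                        # framing 1: the raw count
percent_increase = new_engaged / baseline_engaged * 100    # framing 2: 5 -> 10 is a 100% increase
percent_of_target = new_engaged / target_population * 100  # framing 3: 5 of 200 is 2.5%

print(f"{count} individuals out of {target_population} started engaging in the behavior")
print(f"{percent_increase:.0f}% increase in individuals engaging in the behavior")
print(f"{percent_of_target:.1f}% of the target group started engaging in the behavior")
```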

I'm using this example to make a point, specifically to evaluators and consumers of evaluation.  It is critical to be clear about the evaluative question and what it really tells you.  I'll complicate things more - what if I said that the count of 5 was statistically significant?  What would you say then?  Does your opinion change, or is it already tainted by what I shared above?  Does it call into question the idea of statistical significance?

But let me offer another scenario - what if we aren't talking about engaging in a behavior such as volunteering, but about an intervention where success meant those individuals lived and the others died?  Does this change your view?  What if we are talking about a clinical trial for inoperable cancer in which 5 patients went into remission and the remainder did not?  Does the scale of impact shift your view?

My guess is that some of you said yes to some of the questions above.  As consumers of evaluative data, we bring to the table our own concepts around impact and scale.  A 100% increase sounds darn good no matter what - certainly better than saying we doubled those engaged.  Some might ask a further question and find out the impact was only 5 out of 200.  Depending on the relative strength of the impact (life and death probably being the most extreme example), you can view that result differently again.  Add in the cost of the intervention and the question of value becomes more muddled.

I have a proposal…  As evaluators and consumers of evaluation, let's ask for clarity.  When looking at impact, let's look at expectations and the significance of those expectations as the "comparison group".  In reporting, and in reading those reports, let's keep those expectations in the forefront of our minds and in the text of the report.  As learners, we would want to know this, and frankly, as a consumer of marketing materials, I would want these things clear as well.

As always, I’m open to your comments, suggestions, questions, and perhaps dialogue.  Please feel free to post in the comments section.


Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, July 8, 2015

Have You Given Up?

Ok folks - I'm going to make a confession.  I gave up.  Yes, you heard it here first: I listened to the critics, tossed my hands up in the air, and gave up.  And then a few things happened in my life to change that - I got a dog and changed jobs, among other things.  The most recent thing to affect me was, of all things, a movie.  What many of you don't know is that I'm not so secretly a fan of Disney, and when the movie Tomorrowland was announced - well, I had to see it.  It wasn't the actors or the storyline that made me want to see it.  It was the name and the fact that it coincided with the name of my favorite place at Disneyland and Walt Disney World - Tomorrowland - a place where my dreams of the future were realized.  I was into space, rockets, and science as a kid, and this was a place that embodied it.  So, the opportunity to glimpse how someone would imagine that place, inhabited, was the draw.  And so I went…

Well, before I went, I read the prequel to the movie - Before Tomorrowland - and I enjoyed it.  It was a simple read and the storyline was simple too, but it tugged a string inside of me.  I'm not going to spoil either the movie or the book for you, but suffice it to say, there are people in both stories who haven't given up.  But it wasn't just the book and movie that affected me, it was the movie reviews - both negative and positive.  While there were the standard comments about acting, cinematography, plot, and the like, what caught my attention were reactions to the "message" of the movie.  Some key words:

  • Preachy
  • Dull
  • Wishful thinking
  • Hope for humanity

Or my favorite quote from Rolling Stone - “No cynicism, no snark!  What!  In box-office terms, that usually translates into no chance. Yes Brad Bird’s Tomorrowland, a noble failure about trying to succeed, is written and directed with such open-hearted optimism that you cheer it on even as it stumbles.” - http://www.rollingstone.com/movies/reviews/tomorrowland-20150520

The story arc in the movie is that we (all of us) have pretty much given up.  We look at the world around us and have finally given in to entropy, assuming there is nothing we can do.  Ouch!  But there were the reviews - some of them reflecting that cynicism, many snarky.  There were those who pulled their own beliefs about global warming into their reviews, claiming that the movie was a shill for those promoting it as an issue (note - there is evidence of flooding in the movie, and perhaps global warming was in the lines), but they missed the point of the movie - that we can do something about it, that we can do something about pretty much anything if we just work together to accomplish it.

To give you context, I'm going to tell you more about me.  I grew up in Los Angeles (a city in California, which is a state in the United States).  My dad was a rocket scientist - ok, really he was a physicist, but he worked on systems that flew aboard rockets (among other things).  My formative years were in the 1970s and 1980s.  Back then, we had landed on the moon, sent probes around other planets, and atomic energy still had some promise.  Perhaps it was not as grand as the idealized view of promise we now hold of the States in the 1950s, but people had not given up yet.  The thought was that science could still effect positive change.

Flash forward to today…  We have a movie that actually uses the loss of that vision as a plot hook.  It exists because it is something we can all relate to.  It is a plot that wouldn't have worked back in 1950; back then, folks were actually considering using nuclear bombs for construction purposes and to deal with hurricanes, terraforming other planets, and other things that we now scoff at.  Hope and wishful thinking have been replaced by cynicism and snark.

And so I ask you - have you given up?  I did!  I was filled with these emotions.  I was tired of hearing how change could happen - for it couldn't, could it?...

I won’t shill for Disney and suggest you go see the movie (I did like it though), but I would ask that you take a moment and think.  Have you slipped into the view of cynicism?  Do you find those around you who may still be fighting to improve things, preachy?  Why is that?

.

.

.

Ok - so how does this relate to evaluation?  What?  You didn't think I'd skip evaluation this time, did you?  I am the Evangelist after all!  ;-)

My mentor and advisor at Claremont Graduate University, Stewart Donaldson, has written on Evaluation Anxiety, and practicing evaluators often deal with the issue.  In essence, those who are potentially affected by the evaluation become anxious and fearful - they fear their program will be cut, their positions cut, their support cut.  I see much of this tied to the cynicism I remarked about above.  Someone has given evaluators a bad name - look to my previous blog post addressing it if you would like to see where the fingers might point - but many look at evaluators as either there to demonstrate that their product is good (marketing) or to do the exact opposite.  This focus on value has been present for some time.  However, evaluators aren't interested in demonstrating or destroying; they are interested in helping - or should be.

And so, I specifically ask the evaluators reading this - have you succumbed to cynicism?  Are you really trying to help, or are you just providing value statements when you could be providing more than that?

How about you consumers of evaluation?  What are you using your evaluation-based information for?

Perhaps I’ve turned too preachy as well?  ;-)

As always, I’m open to your comments, suggestions, questions, and perhaps dialogue.  Please feel free to post in the comments section.


Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, June 11, 2015

Collaboratives and Capacity

As an evaluator, I get to see a lot of things. One of the many things I’ve learned is that quality programming is tied to the capacity of the organization to deliver that programming. TCC Group, the company that now employs me, has an instrument called the CCAT (http://www.tccccat.com) that helps organizations assess themselves along four core capacities - adaptive, leadership, management, and technical. If you are interested, you are more than welcome to have a look.

However, I'm not writing just to speak to individual organizational capacities, but to issues of capacity within collaboratives. One of the "strengths" of a collaborative is that individual organizations can share resources - the old adage, "a rising tide floats all boats". The statement sounds trite, and to some degree it is. Just because there is some "extra" capacity in the community or collaborative, it doesn't mean that it is recognized as such, much less shared.

In the tenets of Collective Impact is the concept of a "Backbone Organization". The job of this organization is to support the collaborative efforts of the organizations involved. Taken from a view of capacity, the Backbone Organization would be the one to identify the capacity needs and work within the collaborative (and outside as necessary) to find the resources to fill those needs. In my last role, I developed assessments of capacity for the organizations building collaboratives in many communities across the United States. I wish we could have used the tool we have here at TCC, with some modifications, as it would have saved a great amount of time (but I digress).

While having a collective provides opportunity for finding untapped resources, there are more complications. In my own work, I’ve considered the following along with other issues:

  • What are the needed resources? Is it access to Human Resources support, staffing, equipment?
  • Where are the needed resources? Are they reasonably close that the organization needing them can use them? Are they at the level needed?
  • What efforts are necessary to ensure the resources are usable? Having access to internet bandwidth might be needed, but is there also a need for Information Technology support to ensure the bandwidth is usable?
  • What are the relationships of the organizations? Are they able to collaborate together beyond working on a similar goal?
  • Are there issues of over-provision of services in the community, or service deserts?

Some tools I’ve found to be of help are GIS Mapping and Network Analysis.

GIS mapping allows for looking at the big picture. It doesn't hurt to know where services are being delivered and where there is unnecessary overlap. Simply adjusting service areas can provide needed support to the community without any additional resources. It also allows the Backbone to map out capacity needs and resources and to look at the feasibility of shared support or needs for specific organizations. That van isn't all that useful if it is owned and used by another organization that is clear across town.

Network Analysis not only can help in mapping out and understanding relationships between organizations, but it can also be used to assess the sharing of resources. Over time, you can assess shifts in flow of resources as well as keep track of needed capacities.
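As a minimal sketch of the network-analysis side, one might model resource sharing among collaborative partners as a directed graph and ask who is providing and who is depending. The organization names, resources, and the use of the networkx library below are purely illustrative assumptions, not a description of any actual collaborative I have worked with:

```python
# Hypothetical sketch of mapping resource flows in a collaborative using networkx.
# Organization names and shared resources are invented for illustration.
import networkx as nx

flows = nx.DiGraph()
# Each edge reads: provider -> recipient, labeled with the resource being shared.
flows.add_edge("Food Bank", "Youth Center", resource="van and driver")
flows.add_edge("Hospital System", "Youth Center", resource="IT support")
flows.add_edge("Community College", "Food Bank", resource="volunteer interns")
flows.add_edge("Backbone Org", "Community College", resource="grant writing")

# Who provides the most resources, and who depends most on others?
providers = sorted(flows.out_degree(), key=lambda pair: pair[1], reverse=True)
recipients = sorted(flows.in_degree(), key=lambda pair: pair[1], reverse=True)

print("Top providers:", providers)
print("Top recipients:", recipients)

# Repeating this mapping over time lets you watch shifts in the flow of resources
# and spot capacities the collaborative still lacks.
```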

This brings me around to my push for Evaluation as an Intervention. To be effective, the backbone of any collaborative, or for that matter the leadership of an organization, really needs to understand its capacities and how they can be brought to bear. The results of a capacity evaluation done in a repeated fashion allow these leaders to make the necessary changes (and identify where those changes can be found within their system).

As always, I'm open to your comments, suggestions, and questions. Please feel free to post in the comments section.


Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, June 2, 2015

Evaluation as an Intervention

Today is the day.  Many of you know that I've been hinting at this for years, but today is the day that I make my own mark in the sand and tell you about my own "Theory of Evaluation".  Please note that, much like other theories, mine is an evolution, not a revolution.  I don't think I'm going to say anything here that is too controversial - unless you really subscribe to a certain viewpoint (which we will get to later) - but I thought it was time.

Evaluation as an Intervention

A few blog posts back, I walked you through my growth as an evaluator - starting with a class in organizational development and working to try to understand collaboratives.  There, I introduced you to the idea of evaluation as an intervention - speaking about the engagement of evaluators in program design and use of evaluation data to inform evolution of programs.  Today, I want to make that case stronger and speak more about why I think it is important.

Evaluation is dead?  I don’t think it is, just evolving.

In April of 2014, Andrew Means published a controversial post on Markets for Good - "The Death of Evaluation" (www.marketsforgood.org/the-death-of-evaluation/).  In it, he takes "traditional evaluation" to task and makes some rather bold statements:
  • Traditional program evaluation is reflective, not predictive
  • Program evaluation actually undermines efforts to improve
  • Program evaluation was built on the idea that data is scarce and expensive to collect
The posting created a bit of a buzz in the nonprofit world as well as the evaluation world.  I heard a great deal of grumbling from fellow evaluators - "how dare he declare this!?!?!?!"  On the nonprofit side, I heard some quiet "amen"s.  Neither surprised me.  Nor did it surprise me when I talked with a few funders and heard their thoughts - frankly, they were as mixed as the nonprofits.  You see, there are still organizations out there that, unlike Mr. Means, still want to prove (or, at least in a more generous viewpoint, find out if) their program worked.  We'll get to that in a bit, but more interesting is the trend of organizations that want to learn and move forward.

To fail forward, you need to know why you failed.

In the world of Collective Impact, StriveTogether has promoted "failing forward".  The only way to fail forward is to have information that the failure occurred in the first place, as well as information about why the failure occurred.  Frankly, in the areas of manufacturing and healthcare, folks are already way ahead of the nonprofit world - concepts such as ISO standards that ensure consistency of work and Six Sigma for failure assessment have been around for quite a while.  We also hear about performance improvement and quality improvement from these areas as well as others.  They have learned that a successful organization needs to keep moving forward and improving to stay successful.  Incidentally, they all do a great job of marketing around these concepts as well.

Tradition isn’t all it’s cracked up to be.

So, why does "traditional program evaluation" exist?  Why do evaluations that assess over a longer time, with the program held constant (or perhaps affected by the environment), still get done?  I can think of a number of reasons, some perhaps good, some not, depending on your values - because frankly, it is the purchaser's and user's values that should determine what evaluation gets done.  Here are some of those reasons:
  • Intent to generalize the program to other environments - here people just want to know how the program works
  • Desire to market the program to others or to the current funder - the interest is in sales of the program, not changing it
  • Accountability focus - did the program do what it said it was going to do?
"Traditional evaluation", as Mr. Means calls it, isn't focused on learning - or at least not incremental learning.  It is focused on the "big picture" - did the program do what we thought it was going to do?  As such, it does have its place.  The question is whether that "big picture" is what is valued or something else.

Programs evolve.  Why can’t evaluations evolve as well?

In my own thoughts on this, I came to an interesting observation.  Programs do not stay put for long.  They tend to evolve.  So, the "big picture", long-term evaluation project becomes more and more difficult to conduct, because maintaining a program as it was designed over years becomes more and more difficult as well.  While possible, it just isn't likely that a program will stay exactly the same.  This is one of the reasons why my cousins who focus just on social science research view evaluation as something "dirty".  They see it as having less control (the friend of research) over variables, and as such, the work isn't as pure.  It is also why researchers try to boost the rigor of an evaluation project - to try to clean up the dirt as much as they can.

But social programs, and innovation specifically, are very dirty.  There are changes all the time, and Mr. Means, because of his focus, doesn't value the purity - he values a program that produces results and craves better results (sorry for putting words in your mouth, Mr. Means - I hope I'm right).  And so, I would argue it is time to seriously consider evaluation as an intervention.

Evaluation can't be predictive, but it can support efforts to improve, and by the way, it can use much of the data that is sitting around unrecognized and thus unused.  The trick is to embrace the dirt - such things as local context, environment, and change.  The cycle of collection and reporting also might shift a bit - depending not on the length of a grant or on program goals, but more on expectations for change in shorter-term outcomes.  These "performance measures" and the hypotheses associated with them (yes, we are still talking science here) are the information that constitutes the intervention.  They are often directly tied to service delivery or at least more closely associated with it.  And they are meant to be shared with and used by decision makers as things go.  The real intervention occurs when the information is used and the stakeholders adjust the programming.  Another key hallmark of this work is that measures do not stay constant over time.

Evaluate the system and program as it is, not as it “should be”.

Did I just surprise you?  I think I did!  I actually promoted the idea of dropping and/or changing measures over the duration of the evaluative work!  This isn't how most evaluation work is done.  Michael Quinn Patton pokes a bit at this with Developmental Evaluation, but I'm saying that no measure is sacred - all can go.  Remember, the goal of this work isn't to test a program's impact, or to generalize, or for that matter to look for accountability - it is to improve the program.  As certain criteria are met and the program evolves, it is quite possible that some, if not all, measures will be replaced over the duration of the evaluation.  Measures that were meaningful at the beginning become less meaningful.  This addresses Mr. Means' concerns around evidence of performance and success - if (and it is a very big if) the stakeholders are still truly interested in improvement.  It isn't the measures or the measurer that determines whether a leader is content with their work - rather, they only provide evidence for that leader to make an informed decision.  However, if the evaluation is designed to run on rails and get to a specific destination (and only that destination), pack your bags, for that is where it will wind up.

There are some rules to this.

And so, as we think of Evaluation as an Intervention, we have a few "rules" to consider (think of these as decision markers).  Evaluation as an Intervention:
  • Is intended for improvement of programs and systems
  • Is intended for learning
  • Isn’t intended for accountability assessment
  • Isn’t intended for those specifically interested in generalizability of their program
  • Isn’t intended for “big picture” assessments or marketing
  • Can be inexpensive (we will talk more about that in later blog posts)
Is this for everyone?  No, only if you need it.

As for Mr. Means, I appreciate his view.  It is shared by many, but not by all, and it is important to understand the needs.  It boils down to what information is important and to whom.  In the case of Evaluation as an Intervention, it is just that: recognizing and using evaluation to improve a program or system - not just document it.  And, as with all theories of evaluation, there are moments when one theory is more useful than another.  In my own practice, I engage multiple theories to support my work - I just felt that it was time to really define this one and give it a name.

I'll be writing about this more in the near future as I work through the theory.  As always, I'm open to your comments, suggestions, and questions.  Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Friday, May 15, 2015

What can we learn from a salad?

What constitutes a salad?  What is necessary for something to be a salad?

This was a question my family was kicking around last week - trying to define a salad.  Yes, we have interesting conversations in the Evangelist household.

Merriam-Webster defines a salad as “a mixture of raw green vegetables (such as different types of lettuce) usually combined with other raw vegetables” or “a mixture of small pieces of raw or cooked food (such as pasta, meat, fruit, eggs, or vegetables) combined usually with a dressing and served cold."

But wait!  That could also describe a cold soup!  And, I have had a “hot salad” before as well.  The definition isn’t complete.

We finally came to the conclusion that a salad is like a chair - we are pretty sure we know what typically each is made of, but really it can be multiple things.

How does this relate to evaluation?...  I’m so glad you asked!

One of the key roles of an evaluator is describing, or better said, defining the program, project, initiative, or system that we are evaluating.  Some are easy.  We know that a pencil is some sort of writing implement with a soft marking material, usually graphite (previously lead), surrounded by another material, usually wood.  Well - perhaps even that varies.

What can we learn from these analogies?  That even something "easy" still has a great deal of variability when we wish to define it, especially when we generalize.

So, as an evaluator, I can describe what I see.  My salad consisted of a collection of green leaves (of multiple shades of green and shapes - some with stems), grated provel cheese (see here - http://en.wikipedia.org/wiki/Provel_cheese), olives, and a creamy Italian dressing.  It was delicious (ok, so I do some valuing as an evaluator as well).  I can define this salad at various levels of detail, but it does not generalize well to other salads.  As we ticked off a list of salads, we at first thought we had something generalizable - some sort of sauce.  It can be found in potato salad, hot spinach salad, even pasta salad.  But…  there are other things that come with sauce - my boiled ravioli with meat sauce, for instance (and yes, it too was delicious).

So, the problem was with generalizability.  I can do a great job evaluating the salad and for that matter the rest of the meal.  I can describe it in various levels of detail. I can provide a value judgement on the salad - in fact, I did so above.  But, I can’t generalize the salad to other restaurants.  

But that isn't quite true, is it?  Granted, provel cheese is not normally found in cities other than St. Louis - but a different cheese can be substituted.  Further, go to a Greek restaurant and you might get the olives, but the sauce might be different.  And so it goes.  As we move away from an Italian restaurant to other types of restaurants, moving farther geographically, the contents of a salad change.

There are two things we can learn from this.

First - if you ask the chef at the restaurant where I dined (and I did), he most assuredly will tell you that he didn’t make the salad (design and implementation) to be generalizable.  He wants people to come to his restaurant for his salad - not to have people replicate it elsewhere and steal his business.

Second - that context matters.  The culture of the restaurant defines a significant amount of the salad.  I’m not getting a salad like what I described in a Chinese restaurant.  I might get closer in a Greek or French restaurant.

So - you know how everything ties back to evaluation for me.  Let’s explore those two learnings from a programmatic frame.

We have a program developed by an organization (the salad) that, frankly, the organization wants to make as unique as possible to differentiate itself from other organizations' programs.  The program is designed for a specific culture and environment, and the designer isn't interested in others applying it to their settings.  As an evaluator (funders, you should think about this too), we are going to learn a great deal about the program (salad) and we can explore it in depth - but generalizing it to other environments is going to be highly challenging and, depending on how different the context/culture you are exporting it to is, may require such modifications that the new program (salad) would be impossible to tie back to the original design.

Is the cause lost?  No, we just need to pay attention to the context and culture.  There are things in my Italian salad that would work in a Greek salad.  Green leafy vegetables are enjoyed by both palates.  Olives work as well.  Even the dressing is similar in its constituent components.  As the culture differs more significantly, the capacity for generalizability degrades.  The green leafy vegetables might be present in my Japanese salad, but olives are probably not going to be present and the dressing will be significantly different.  Even the container and the method by which I consume the salad may be different.

As evaluators, we are often asked to pull out these important lessons for our clients.  In the case of a program that is built and designed for a specific context and culture (frankly, I would say most are), we need to know and understand how the context and culture affected the program design and implementation.  What tools are present?  (Eating with a fork or chopsticks?  Is cutting the greens at the table ok, or is a knife even present?)  Miss these and you are going to advise an organization incorrectly.

So, we must pay attention to the context and culture (environment) of the program, project, system - but we must also understand, if there is an interest in generalizability, the environment to which we wish to port the program to determine what modifications might be necessary.

I've been on the soapbox for a number of years that evaluators should be involved in program design - here is a great example of where they can be most helpful.  Often, they are engaged to do some sort of summative evaluation with the thought of taking those learnings and applying them more generally.  But there is a disconnect that often occurs.  The evaluation is completed on the original program.  The report is created with little thought beyond description and value assessments of the program.  And the organization funding the evaluation takes those findings and designs and implements something.  Often, the evaluator only knew it was a summative evaluation and did the job - they may even have known the purpose, but I'm going to ask a question here…  How many evaluators take it a step farther and ask, "Where are you planning on generalizing this program to?"  How many take it a step farther and incorporate an assessment of context and culture for the new environments?  Granted, those steps are often not funded or even considered (most don't know where they plan to implement next).  But by keeping the evaluator involved in the program design for the generalized version, they can serve as the critical friend to talk about context, to bring in key people in the communities to share their thoughts - to test what is going to work in the new environments.

As a result, you will wind up with less discordant programs.  Ever see a pizza served in a Chinese restaurant?  These occasionally find their way to kids' menus.  I wonder how often they are bought and eaten?

As evaluators and consumers of evaluation, I’m curious to hear your own thoughts on this.  Do you think of these things when you are considering evaluations?  Have you run into programs, projects, systems that are so tailored to a certain environment that the generalizability would be extremely difficult?  Have you a definition for salad that addresses all the possible combinations - including pasta, meat, hot, and fruit salads?  Are we asking too much to attempt to define beyond what we see, to create artificial categories/structures to pin programs to?  If we reject those, how do we learn and share outside contexts?

As always, I’m open to your comments, suggestions, and questions.  Please feel free to post comments.


Best regards,

Charles Gasper

The Evaluation Evangelist

Friday, May 1, 2015

Evaluating Collaboratives - Exploring the Symphonic Metaphor

In my previous blog post, I mentioned that I would be visiting the symphonic metaphor again in the future.  Well, welcome to the future!...

At the time of my writing this, we still don't have flying cars or jetpacks.  What we do have is a focus on collaboration among multiple sectors to effect positive change in communities.  There are many brands for this type of work, but in reality, it is just organizations of many types (nonprofit, business, civic, etc.) and individuals (concerned citizens, elected officials, etc.) coming together to try to solve an issue.  To do this work, there are many steps - and of course, evaluation can offer support to each step.


Identifying and Agreeing Upon an Issue

To get there is no easy task.  There are many steps and much can get in the way.  The first issue is identifying what is important.  I've a bit of experience with this, and you would be surprised at how difficult it is to come to an agreement about what constitutes a community issue.  While not considered a specific evaluative domain by many people (how often have I heard, "there's nothing to measure yet, we don't need you"), many of the skills evaluators engage can be of use.  Some of the methods I've used include:

  • Visioning exercises - These are great for getting people to present issues in a positive manner and often also can be used to establish the goal(s) of the collaborative.  Some prompts have included:
    • It’s 20 years from now and CNN, CNBC, Fox News (whomever) is talking about the major change that happened in your community, what was it?
    • You are being interviewed for the newspaper about what you accomplished, what was it?
    • A genie grants you 3 wishes for your community - what are the things you wish for?
  • Service overlap mapping - This is great for starting the conversation around what people/organizations are bringing to the table.  This is like a heatmap versus a geographical map.  Here we often follow with additional questions:
    • Why are you providing the service?  (And you can’t just say there is a need.)
    • Where are there gaps on the map (service deserts)?  Why are they there?
    • What do the services have in common?

The neat thing about the two above methods is that you are attacking the problem from two different directions.  In the first case, you are just aiming for the result (impact, outcome).  In the second, you are looking at what people are doing and allowing them to weave it together into a meaningful result for the group.

Incidentally, you are also starting to set up your program theory and evaluation framework, as you are establishing the long-term outcomes the group is collectively shooting for and then working backward to individual organizational outcomes and activities.


Identifying and Agreeing Upon What the Collaborative Is Doing (Or Will Do)

As an evaluator, you want to know what the activities are.  As a community activist, you want to know what your partners are going to do to support the cause.  This is another sticky issue, as many organizations and individuals might not recognize the contributions of others as relevant or appropriate.  This is where I like to help by using the results of the previous work.  We have our agreed-upon impact - we now need to agree upon what outcomes predict success.  We often rely on the organizations and individuals to provide us with their theories of impact (we can talk about this in another blog post in the future).  When drawn out and discussed, the map can look something like this:

[Image: a theory-of-impact map with arrows radiating outward in all directions, in the style of Michael Moorcock's symbol of chaos]

The fantasy author Michael Moorcock is the originator of the design idea - his symbol for chaos.  And it is chaos that can occur if there isn't "alignment" of the efforts - in essence, the community's impact goal is never achieved because everyone is pulling hard, but in different directions.  The evaluator, through the clarity of the theory of impact, can help the organizations and individuals involved see what can happen and, with data, may be able to articulate it.  This service helps the group agree upon efforts.


Note of Caution

Please note, I’ve simplified this.  In reality - we are about 2 or so years into a collaborative’s work and if we are lucky, we now have agreement on what we are trying to accomplish.


Changes

So we have agreement on what we are trying to accomplish, and we are, in theory, pulling in the same direction.  As part of this process, you are going to be talking about definitions and clarifying indicators of activities and outcomes.  Well - now the evaluator moves to a more traditional role: tracking activities and outcomes.

Much like with any individual program, there are changes that occur.  Attention is often focused on the impact on the community as measured by these changes.  However, there are other impacts that seem to accompany collaborations:

  • Changes in relationships and collaboration among the partner organizations and individuals
  • Individualized organizational change

When thinking about these collaborations, we really need to attend to all of these.  There are shifts that occur in capacity.  While I'm plugging the work of my organization here, TCC Group has a fantastic paper on what we call Capacity 3.0 - http://www.tccgrp.com/pubs/capacity_building_3.php.  It speaks to how we need to build capacity while thinking about the social sector ecosystem, and how organizations need to understand, respond to, and structure themselves to adapt to changes in that ecosystem.  Well - this informs some of my own thoughts, not just from one organization's standpoint, but across a collaborative.  Partners need to see those changes and calibrate to collaborate effectively.  The evaluator can provide that data, if they are tracking all three change arenas (not to mention also looking at the other environmental factors).

And So On To the Symphony

As a collaboration forms, we are able to see how the symphony is a good metaphor.  Prior to the curtain going up and the conductor taking the stage, we have sounds of music.  As each instrument tunes, its individual melodies of practice float through the air.  In combination, they are sometimes discordant and chaotic, but there are also moments where they seem to flow into a strange synergy.  These are the accidental combinations that can occur in the field.  But with the conductor (not the evaluator - we are just the critical friend and listeners), we can help the orchestra practice.  Issues such as:

  • Choice of music
  • Selection of instruments for the piece
  • Sheet music to follow
  • Parts for the instruments to play
  • Timing and pace of the piece

Can be addressed.  And like the orchestra, this work takes practice to improve.  The evaluator helps by providing the feedback to the conductor and the other partners in the piece - providing feedback to the key council or leadership of a collaborative and partner organizations.

As always, I’m interested in your thoughts.  Please post comments, suggestions, or questions about what I’ve shared.  I’m interested in learning from you as I share my own thoughts here.  Please feel free to post comments.

Oh - one more thing…  While I did allude to my employer, TCC Group, please note that these are uniquely my thoughts and do not necessarily represent the thoughts of the organization.


Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, April 14, 2015

My Return as a Blogger and My Life as an Evaluator

I was reading through my previous blog posts from the past few years and I came across a post I made on November 17, 2010 - the title?...  Wait for it…  My Return.  Then, it had been over a year since my last post.  Well, once again, I'm writing about my return, and once again, it has been over a year.

Changes

Today, I shared with many of you a change that occurred in my life.  I had been considering something like this for well over a year and finally decided it was time to switch things up.  Earlier this month, I started working for TCC Group.  TCC works with foundations, nonprofits, and corporate giving programs.  The institution has over 30 years of experience in the areas of strategy, grants management, capacity building, and evaluation.

I was thinking it was a good idea to write about where I have been for the past year (really years) and why I haven’t written.  Then I realized – I have been working as an evaluator for over 20 years, have I ever really told you why I got into the business?

Have a seat, I’m going to share a bit here about how I got here.

Setting the Hook
In 1991 or so, I was an undergraduate student at Santa Clara University.  There, I was working with someone who would greatly influence my life, Dr. William McCormack.  One of the courses I took was on organizational development, and we were required to work with an organization and write a case study.  That experience drove me to think more about organizational effectiveness and impact - but it took a few more steps to put me on my path.

There was some additional evaluative work along the way, but in 1995 I found my love.  I was working on a project to assess the impact of a program on the quality of life of the individuals it served.  When the evaluation was completed and we submitted our findings, the state took our work and changed the program - improving the lives of the people served.  Thousands of people were impacted by my work!  I had found my calling.

Learning that Learning is Important
Along the way, I worked in quality management for a health system.  This experience molded my view on evaluation further, evolving it away from summative, value-focused work to learning and improvement.  Our work improved the experience and health outcomes for patients in hospitals and also resulted in organizational savings for the health system.  We weren’t conducting long-term studies, but instead focusing on short-term outcomes that predicted long-term success.  The mantra of the day for organizational change was, “what can we get done by next Tuesday?”

Movement to Large-Scale Impact and Collaboration
In 2007, I became a Director of Evaluation for a large health-focused foundation, the Missouri Foundation for Health (MFH).  It was time to think differently again - or, perhaps better said, to broaden my thought.  Prior to coming to MFH, my focus was on program evaluation (even if the programs were larger).  Now it was time to see how multiple programs (yep, I was still focused on programs) could interact with one another to effect larger-scale impact.  During my stint at MFH, I also returned to graduate school, and while my evaluative practice was informed by program theory, it truly shifted to being theory-based (I'll talk about that in a later blog post).  I started to think about systems in a broader sense, not only seeing how an individual program interacted with a larger system, but how systems change can effect improvement across broad swaths of issues.

Evaluation as an Intervention
About this time, I started thinking about evaluation differently.  I recognized that programs and systems evolve over time and that evaluation can better support the effort if it doesn't stand completely separate.  Provision of shorter-term information tied to program theory can better inform the evolution of programs and identify where the efforts are being effective.  There has been a shift in AEA in recent years, now recognizing that evaluators can be, and some think should be, involved in program design.  My epiphany in 2010 was tied to my recognition that my role as an evaluator in a foundation should support such work.

Collaborative Impact
Most recently, I've focused on how collaboratives are built and the results of the collective effort.  I've been assessing how these efforts combine with and attempt to change simple and complex systems.  You can think of these as multiple programs working in concert.  In reality, the concert often sounds like a bunch of instruments tuning up rather than following a score.  My work has focused on how to get the instruments to play together in the same hall, follow an agreed-upon score, and perhaps follow the baton of an agreed-upon conductor.  (We will revisit this analogy in a later blog post.)  There is a great deal to learn about collaboratives.  Certainly, this is something that has been done for years.  Folks have given it different brand names, but really it is just about learning how groups of people, organizations, civic leaders, and communities can come together to effect systems change.

Which Brings Us to Today
The process of moving to the TCC Group made me contemplate my practice as an evaluator.  In the interest of keeping this post relatively short, I have shared only some of the revelations I made.  My journey reflects two key things: 1) my own personal growth in the field and 2) evolution in the field of social change.  To be clear, programs are still important.  Valuing the impact of those efforts is also very important.  However, sustainability of the programs and their supporting organizations, organizational and communal learning, and systems change have become more important as those attempting to effect larger scale change turn away from focusing on just their work to look at the environment around them.

And So It Goes
I hope you enjoyed reading about my evolution as an evaluator.  Much like the programs and systems we evaluate, my practice will continue to grow.  I would be very interested in learning your stories about how your engagement with and understanding of evaluation have changed over time.
As always, I’m open to your comments, suggestions, questions and yes, your stories.  Please feel free to post comments.

Best regards,
Charles Gasper
The Evaluation Evangelist

Friday, March 21, 2014

March Madness and Evaluation

Let’s just get it out of the way - March in the United States is about basketball, college basketball to be exact.  The colleges have been playing since November, but it is in March that the general public (people who normally don’t follow collegiate basketball) suddenly gets interested and involved in filling out “brackets”.  If you haven’t seen one of these - here is mine...

[Image: my completed 2014 NCAA tournament bracket]

So what is the relationship between the assignment of schools to the brackets and evaluation?  Well, that isn’t such a simple explanation.  What you see are my selections, but the story isn’t about what I wound up selecting, but rather the process I took in selecting them and the manner in which others make their own choices.  Let’s be honest, one of the major reasons we engage in evaluative work is to predict how something/someone will perform in the future.  In the case of selecting which schools are going to advance through the tournament, there is a good amount of evaluative data on the page for you to see.

The schools are ranked going into the tournament.

Their individual win/loss records are presented.

The locations where the games are being played are also presented.

Finally, who they are scheduled to play is part of the bracket.

While the analysts and bookies who spend significantly more time on this than I do bring additional evaluative data to bear, we do have some interesting data that speaks to how we use evaluative information to make decisions.  Let’s break this down by the manner in which I looked at the information and came to the startling conclusion that UCLA will win this year’s NCAA tournament.

One of the things you will note, if you pore over my selections, is that the higher ranked teams tend to be the ones I picked.  They also tended to have a higher win/loss ratio.  This basic information explains some of my decision process.  However, the context of the game also plays a factor.  Much is made of the impact crowds have on games.  While a different sport, the Seattle Seahawks in professional American football and Texas A&M’s collegiate football team promote their in-stadium fans as the “twelfth man” - recognizing their “contribution” to the game.  So, the context of the game and a guesstimate of the ratio of fans in the arena played a factor.
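To make that informal weighting concrete, here is a minimal sketch of the kind of mental arithmetic I was doing.  The weights, crowd shares, and the hypothetical opponent below are invented for illustration - this is not actual tournament data or a real prediction model.

```python
# Illustrative only: a toy weighting of the evaluative data visible on a bracket.
# The seeds, records, crowd shares, and weights are made up for this sketch.

def score(team):
    """Combine seed, win/loss ratio, and a guesstimated crowd share into one number."""
    seed_points = (17 - team["seed"]) * 2.0                  # better (lower) seed earns more points
    record_points = team["wins"] / (team["wins"] + team["losses"]) * 10.0
    crowd_points = team["crowd_share"] * 5.0                 # share of fans expected in the arena
    return seed_points + record_points + crowd_points

def pick_winner(team_a, team_b):
    """Return whichever team scores higher on the heuristic above."""
    return max(team_a, team_b, key=score)

ucla = {"name": "UCLA", "seed": 4, "wins": 26, "losses": 8, "crowd_share": 0.6}
rival = {"name": "Hypothetical Opponent", "seed": 13, "wins": 21, "losses": 12, "crowd_share": 0.4}

print(pick_winner(ucla, rival)["name"])  # -> "UCLA" with these invented numbers
```

Of course, nobody fills out a bracket quite this mechanically - which is exactly the point of what follows: the inputs are incomplete and the weights are biased.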

There is also missing data that went into the decision process…  Wait!  Did I say, missing data?

It isn’t missing, rather it is evaluative data that I used that wasn’t included on the paper - from other sources of “truth”.  To share a bit more about me, I’m an athlete and a coach.  I have participated in and coached multiple sports at a competitive level and recognize that in addition to the crowd’s influence on the outcome of the game, there are other factors that can affect an athlete’s performance.  This leads me to explain why I picked UCLA to win, much less why other teams advance over others.  Namely, I’m familiar with the history of UCLA basketball and the fact that the coaches and the athletes can “call upon” that history to give them a bit of an extra boost in their games.

This all leads me to explain more around why I’m using the tournament as a metaphor for a program and the information in the brackets as evaluative data.  There are a few lessons to learn here.

  1. I’m taking into account both the context of the game (program) and my own sources of truth.  Good evaluation practice should incorporate what the stakeholders revere as sources of information and data, along with the context of the program implementation.  Better use of evaluation data for decision making should also take these factors into account.
  2. I’m clearly using flawed data for my decision making.  Just look at my final game - UCLA versus Michigan.  I know UCLA’s history; I don’t know Michigan’s.  It may be that Michigan has an incredible history as well - to be honest, there is a nagging part of my brain saying that there is something there, something big.  However, I made the decision to ignore that part of my brain when making my selection.
  3. Speaking of ignoring information, let’s look at my decision to pick Stanford over Kansas.  This highlights decisions that can be disguised as informed by evaluation, but in fact are made “from the heart”.  Living in Missouri, I have a bias against Kansas - don’t ask me where it comes from, but there has been at minimum a rivalry for years between the states’ schools.  Add that I attended a Pac-12 school for a period of time, and my “allegiances”, and thus my decision making, become clear. The lesson here is that while we would like to say that our programmatic decisions are driven by evaluative data, our own biases do creep in.  As leaders who use evaluative data to make decisions, we need to recognize our biases and be honest with ourselves and our teams.

It is these lessons that will serve us all well, not only as evaluators, but also as consumers of evaluative information and as decision makers.  Paying attention to the context of the program, the agreed upon sources of truth, and the fact that some information may not be found in the official structures of the evaluation helps improve our understanding of the program and informs decision making going forward. And even if we have all the data we need, the complexity of the program may result in a different outcome than we expect - as history will certainly prove with my bracket results.

As always, I’m open to your comments, suggestions, and questions.  Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Monday, August 20, 2012

Complexity - Excuse or Misunderstanding?

Is complexity an excuse or evidence of a lack of understanding? Friends, have you ever had the situation where you have asked someone to explain why something happened, or perhaps why they are feeling a certain way, and you have gotten the response – “it’s just too complex to explain”? I wish I could see the wry looks on at least some of your faces or perhaps the silent nods. One of the major responsibilities of an evaluator is to attempt to understand the program or organization they are evaluating. As most of you know, especially if you have read my past Blog posts, I adopt a theory-based framework for my evaluative work. That means that I’m constantly asking not only about the connections between activities and outcomes, but also why the team believes the relationships exist. Oftentimes, I get a response that is very much like the quote above – “our work is just too complex to consider, much less model!” Yet, the core concept of theory-based evaluation is the idea that we can get to the underlying connections and reasons.

Much like the Kübler-Ross Stages of Grief (http://en.wikipedia.org/wiki/Kübler-Ross_model), many of the teams I’ve worked with have gone through stages that move from “it is too complex” to “yes, that is what we do and why we do it”. Once we get people to actually agree to engage with us, their models reflect the complexity seen in models such as this.

[Image: Ptolemaic (geocentric) model of the solar system]


If you remember your high school physics class and paid attention during any astronomy class, you will recognize that the model above (found at http://farside.ph.utexas.edu/teaching/301/lectures/node151.html) is the Ptolemaic model. To make the complexity of the model work with Earth in the center, the planets need to orbit around a central point as that central point orbits around the Earth in a direction opposite to that in which the Earth rotates. The machine necessary to model this looks something like this, found here (http://remame.de/wp-content/uploads/2010/03/astrolabe_2.jpg):

[Image: geared astrolabe mechanism modeling the Ptolemaic system]

Note the complexity of the gearing and process required to model the complexity. However, there is another step – moving to a simpler model – and that requires the team to take a step back and not hold a geocentric viewpoint of their own program or organization, but rather to try to look at everything a little bit differently. In the case of astronomy, opening up to the notion that the Sun, not the Earth, sits at the center of gravity of our local solar system resulted in a simpler model – something less complex, found here (http://biology.clc.uc.edu/fankhauser/classes/Bio_101/heliocentric_model.jpg)

[Image: heliocentric model of the solar system]


and results in simpler mechanics as can be found here (http://www.unm.edu/~physics/demo/html_demo_pages/8b1010.jpg)

[Image: simpler mechanical model of the heliocentric system]

The simpler model allows for a more accurate representation of what is actually happening and then permits corrections, such as accounting for the fact that the planets do not orbit the sun in perfect circles. In organizations and programs, similar moments of clarity allow the team to test deeper assumptions and improve their associated projects.

Now, let’s be honest: organizations and their programs, much like true orbital mechanics, aren’t simple – there are layers of complexity. However, there is true complexity and there is complexity driven by poor assumptions or an inability to stop and look at things objectively. The role of the evaluator is to help break down these viewpoints and help the team see through the complexity they have invented due to their preconceived notions, so they can see the true underlying mechanics of their work and its outcomes. The process isn’t easy and in some cases, I’ve found that the work I do is more like that of a therapist than that of a researcher. There can be displays of frustration and anger as the team works its way through understanding their organization or program. And much like some therapy sessions, the team can pretend that there is agreement among them when there isn’t – unifying against the evaluator to avoid the pain of the experience and/or the possible pain of discovering that their world view isn’t as clear as they would like. I will write more about process another day, but suffice it to say, opening people to other views can be rather difficult work.

So back to the original question, is complexity an excuse or evidence of a lack of understanding? I’ve often found it can be both, and with that in mind, the wise evaluator, interested in understanding the theories of an organization or program, will continue to try to get their team to “simplify” their model of their theory. It is in that simplification that real and difficult discussions occur that provide insights as to what the organization or program is trying to accomplish and how.

Also, please note that at no point did I say that complexity isn't a part of everything we do - it most certainly is. However, experience would indicate that when we think about what we do and how we do it, our mental models are significantly more complex than reality. Further, our perceptions of what we do and why are often colored by how important we want to feel and how much we desire others to understand how difficult it is to be us. To those of you who fight to help teams tease out the true complexity from the self-generated complexity… To those of you who struggle to bring clarity to a complex world… Thank you!

As always, I'm open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, January 5, 2012

If Your Friends Were Jumping off a Bridge, Would You Do It Too?

Ok, show of hands – how many of you had parents that asked the question in the title of this blog post, or something similar, when you were a teenager? I’m looking forward to a number of years from now when I utter those words out loud. Truth be told, I have had several opportunities in the past years to say something similar when working with organizations around evaluation.

It doesn’t take much to recognize that I’m a strong proponent of evaluation. Would a guy who didn’t think evaluation was important call himself The Evaluation Evangelist?... However, I am also a strong proponent of use of the information gleaned from an evaluation AND very much against wasting resources on creating information that will not be used.

I’m going to ask you to raise your hands again here… How many of you have been asked to design an evaluation and when you ask your client those fateful words – “what would you like to learn?”, you get a response of a blank look, confusion, or something to the effect of, “we don’t know, we were hoping you could tell us”? Trying to get more information, you might follow up with a question like, “why do you want an evaluation done?” and get the response of “the funder wants one”, “we are supposed to do an evaluation”, or the like. More often than not, I find myself on the receiving end of one of these responses.

As consultants, do you find yourself trying to design an evaluation for a client that doesn’t know what they want or why they are hiring you to do the evaluation? As program or organizational leaders, are you finding yourself hiring evaluators without knowing what you plan to get out of the evaluation? My guess is that at least some of you are nodding your heads or at least remembering a time when you might have found these to be true.

So, why is evaluation so popular these days? As people interested in the promotion of evaluation, why should we care as to why evaluation is popular and just enjoy the fact that interest is increasing? As an evangelist, shouldn’t I just be content that people are now asking for evaluation and thus I’m employed to help them understand what evaluation can do for them? To this, I must answer an emphatic NO!

Evaluations done just because a funder requires it or because the leadership has heard or read somewhere that it is a good thing to do (or worse, because it just is something one must do) will end up not being used. At best, the contracted or internally hired evaluator might be able to work with the organization to identify evaluation questions – but in the end, the organization needs to be the one driving the questions.

Metaphorically – think of the joke about the drunk that has lost a quarter and is looking under the streetlight. Along comes a guy who asks the drunk what he is looking for and the drunk tells him about the quarter. The guy asks the drunk where he lost the quarter and the drunk points off in a direction and says, “over there”. When then asked why he is looking under the streetlight, the drunk says, “the light is better over here.” I liken this experience to the organization that is asking for evaluation without guidance. In this case, the drunk (the organization) wants help to find something and the guy (the evaluator) winds up having to ask all sorts of questions that may unpack an issue to address.

But it can be and often is worse… Because these organizations often don’t have evaluation questions formulated, it is as if the drunk is searching for something, but doesn’t know what it is. He may never have lost the quarter in the first place. As such, the helpful evaluator might find a different quarter, a dime, a stick of gum, and a rusty bolt on that sidewalk as well. All these things might be useful in some ways to the client, but since he doesn’t know what he is missing (if anything), he may not value the findings. As such, the evaluation findings are not used.

Now, some may argue that there are situations where having evaluation questions on the front end isn’t a good thing. Perhaps those situations exist, but even then, I would hope that there is some reason for engagement in evaluation other than just because it is done or others are doing it.

So dear reader, I leave you with a thought for the next time you consider an evaluation (either requesting one or supporting one). Think to yourself: why are you on the bridge and why are you considering taking the leap? Is it in support of well-thought-out evaluation questions, or is it because everyone else is doing it?

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, July 13, 2011

Educating This Generation’s Evaluators

Some of you may know that I am on an incredible learning journey called “Graduate School”. I am nearly a year into this experience and can honestly say that my own thoughts around theory and practice have been influenced this past year. Clearly, the work of Claremont Graduate University, as well as Western Michigan and a few other schools, is bringing a focus and commitment to professional evaluation otherwise not found. They are sending master’s- and doctoral-“prepared” professionals into the world to engage nearly anyone they can in evaluation. If you want to get a sense of how important I think that is, just skim through the titles of my previous postings over the years. I think there is room for more of these institutions around the globe, as my own experience of being the ultimate commuter student to pursue my PhD has taught me. You see, there wasn’t a school nearby with a strong evaluation-focused program in which I could expand my own knowledge and expertise. There wasn’t a community of thinkers locally available. So, first with Claremont’s Certificate Program and later my application and acceptance to the Graduate Program – I found my community. However, I think the stars aligned and I was lucky. Claremont had just started the Certificate Program (I was in the first cohort) and if it wasn’t for the vision of the leadership of Claremont’s Psychology Department, with Stewart Donaldson at the helm, I would be stuck, wishing.

As you can probably guess, I have an idea… Well, a few anyway.
1) Online programs have a bad reputation in the academic world. There is a viewpoint that they are not as rigorous as residence programs. This viewpoint needs to change. Online participation in residence programs is now possible – my experience is a case in point. In fact, there are times that I believe I get a superior experience to the resident in the classroom, having access to a teaching assistant with whom I can discuss thoughts and ideas that occur to me during the class without disrupting the class by vocalizing them. Granted, my experience is a bit different than other online experiences – perhaps in the area of requirements. But, with a bit of effort, the technology is currently present to maintain those requirements – even when the student is thousands of miles away from the campus. I suggest that the schools that educate and train professional evaluators examine this idea more closely and experiment.

2) Workshops at conferences, institutes, and the like are good entrées to topics, techniques, and theories of evaluation – but that about covers it. The onus is on the “student” to seek out additional venues of learning, usually books or websites. AEA has done some fantastic things to offer more information to members in the form of AEA365, its LinkedIn group, EVALTALK, and others. EVALTALK was my link with the evaluation community, a place to ask questions from time to time, and LinkedIn has assumed some of that role as well. AEA365 provides great tips and links to useful ideas – but there is still something missing: an organized, progressive training opportunity for evaluation professionals.

On a daily basis, I work with both amateur and professional evaluators. Frankly, that differentiation is unfair. I work with folks along a spectrum of evaluation knowledge and skill. I engage academics that have poor evaluation knowledge and skill as well as academics that are highly knowledgeable in this arena. [At some point, I will write more about the differentiation between content experts and evaluation experts – something a good number of nonprofits and funders misunderstand.] I also engage individuals with bachelor’s and master’s degrees in fields not traditionally associated with evaluation or research who are highly knowledgeable, and yes, there are those with little knowledge in this category as well. Sending all of these people to a workshop to learn aspects of evaluation is not going to go far in improving their abilities. They need more support than that.

My own work in this area is leading me to a coaching model for engaging and training those lower on the evaluation knowledge continuum. In such a model, technical assistance in the more traditional forms of workshops and one-on-one training occurs – but the “instructor” or “coach” continues to have contact with the “students”, providing continual education as needed. Like a player on a coached team, the “student” receives the training and then is allowed to “play” (conduct appropriate evaluation work at their level) with additional mentoring and advice from the coach. Occasionally, the “student” returns for training (again, envision a team practice) for additional skills/knowledge development. We are testing this in a few projects I’m associated with, and if you happen to attend this year’s AEA conference in Anaheim (http://www.eval.org/eval2011/default.asp), you are most welcome to catch a presentation sharing our experiences with this in one organization and where our theory of capacity building has evolved.

However… This still leaves a large gap in the education of evaluators – specifically the group I would call semi-professionals. These are the people in the middle of the continuum who have perhaps a master’s degree or even a strong research-focused bachelor’s degree. They often have been practicing evaluation for a shorter period of time and, if they are lucky, work in an organization with a more experienced and/or better trained evaluator. But often, they are not – and they are looking for additional educational opportunities. They may sign up for and attend workshops on topics, but as mentioned earlier, these are just teasers relative to the depth of focus found in a graduate level course on the topic. Oh – and the reason I can speak about this is that this was me many years ago and, as I mentioned, I eventually got lucky. But until I got lucky and was able to find a program that was a good fit and allowed me to stay in my profession – I did what most of these semi-professional evaluators do. I attended workshops and conferences, read books and journal articles, and posed my questions on EVALTALK. And honestly, it wasn’t enough. Yet, with the exception of a few opportunities, there really is not much out there for the advancement of people falling into this category. Some are early enough in their careers that they can make the move to a direct residence program. In my case, the residence program accommodated me. But, there need to be more opportunities like mine – otherwise, we are leaving the semi-professional evaluators to their own devices with little support.

Do you have ideas to how to build evaluation capacity and knowledge? Please share!


As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, February 23, 2011

Language and Evaluation

A great many people have spent a great deal of time thinking about what differentiates evaluation from research. I won’t press too far in this, other than to share that if you Google for the statement “difference between evaluation and research” as of the posting of this Blog, you would get over 5000 hits. Now, I’m sure there is much repetition in the form of quoting others and the like, but still – 5000 pages in which that statement occurs. Well, I’m going to talk about one aspect that affects my life almost on a daily basis – issues of language.

The current state of the art of evaluation suggests that a good evaluator is one that engages his stakeholders. What does that really boil down to? You have to talk to the stakeholders. Now, stakeholder is actually a very large word – and no, I don’t mean the number of letters or the fact that it might be a higher order vocabulary word for the SAT, ACT, or GRE. Rather, the concept of a stakeholder can spread across many different groups of individuals depending upon what sort of approach and philosophy you have about programming and evaluation. I’m not going to go into the various possible combinations, but suffice to say that you can be dealing with people who fund the program, implement the program, participate in the program, are not participating in the program, are in the community where the program is implemented, are not in the community where the program is implemented, and on and on and on. The combinations aren’t so much important to this Blog post as is what constitutes their backgrounds, understanding, and vocabulary.

A few years ago, I had a discussion amongst individuals who all work for the same organization. These individuals all held the same position within the organization. This organization used and currently still does use the word – Objective. When asked what the word meant, the broadest definition could be – what the program is trying to achieve. However, that is where things broke down. For some, Objective meant a Programmatic Outcome. For others, Objective equated to a Programmatic Output. Still for others, an Objective was an Organizational Outcome. And for yet another group, it was a change in Organizational Infrastructure. All were focused on “Measurable Objectives”, but no one really agreed on what an Objective was. After a year’s worth of discussion and negotiation, we came to the agreement that an Objective would be a Programmatic Outcome or Organizational Outcome. At least we got it to an “Outcome”.

When was the last time you had to have a discussion about the language amongst researchers? Ok, those of you who do language research, put your hands down! You get my point, I hope…

But the point was driven home to me again today. In a meeting with the team, along with another three evaluators, we discussed an evaluation project we are designing. During this meeting, I uttered another term that I thought we all understood – “Comparison Group” – and was shocked to discover that their impression of what the term meant diverged from my own impression and that of the other evaluators. When they heard “Comparison Group”, they translated that to “Control Group”. They had a decent definition of a Control Group, and we all know that engaging a Control Group for a study can require significantly more resources than engaging a Comparison Group, especially when the Comparison Group is not individually matched.

[Pausing for a moment here, because my own language may differ from your own… Control groups are usually associated with randomized controlled trials (RCTs), and the costs of engaging in an RCT in community-based programming and evaluation are very high. Control groups are a subset of comparison groups, which are simply groups with whom you compare the outcomes of the group that experienced your program.]
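For readers for whom the distinction still feels abstract, here is a minimal sketch with entirely hypothetical participant lists: a control group is created by randomly assigning people before the program runs, while a comparison group is simply a similar group that was never randomly assigned.

```python
import random

# Hypothetical names only - this illustrates the vocabulary, not a real study design.
applicants = [f"person_{i}" for i in range(20)]

# Control group (RCT-style): random assignment happens BEFORE the program runs,
# so the two groups differ only by chance.
random.shuffle(applicants)
treatment_group = applicants[:10]   # receive the program
control_group = applicants[10:]     # do not receive the program

# Comparison group: no random assignment. The program serves whoever enrolled,
# and a similar group (say, a neighboring community) is found afterward to compare against.
program_participants = [f"enrollee_{i}" for i in range(10)]
comparison_group = [f"neighbor_{i}" for i in range(10)]

# Either way, the analysis compares outcomes across two groups; the control group
# simply provides a stronger (and costlier) basis for attributing differences to the program.
```

That is all the bracketed note above is saying: every control group is a comparison group, but not every comparison group carries the cost of random assignment.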

The meeting around this study was rapidly devolving and the design was in jeopardy until I figured out that this was a language issue and not a design problem. The team had agreed to the design. They were under the impression that I was forcing a more rigorous study that would be costly across several domains. I was under the impression that they were stepping back away from the design and wanting something significantly less rigorous. Conflict was brewing. Fortunately, the issue of language was identified before things spun out of control.

I’ve presented the idea before and I’ll present it again. We need better-informed consumers of evaluation. Too often, I find myself and other evaluators changing language and/or dropping evaluation vocabulary out of discussions to attempt to avoid misunderstandings. I’m starting to wonder whether we are doing our clients and ourselves a disservice by this. In our own desire to make things easier for everyone in the short-term, we might be causing issues for the next evaluator. Worse, like the discussion around the term Objective, our looseness of language might cause more confusion. I’m considering a short study for myself – to keep the evaluation language in and attempt to be more precise in my definitions with my clients – to see if I can reduce confusion. Anyone else want to give this a try? I would also like to hear your thoughts on the idea.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, February 9, 2011

The Man in the Middle

I started writing this post about two days ago and discovered rather quickly that I was writing more than should fit in one Blog post – so… Instead, I’m going to subject you to a series of posts discussing the reconciliation of the wants/needs of the funder around evaluation with the wants/needs of the small to medium-sized nonprofits with whom I’m familiar. Tossed in will be some reflections on some reading I’ve been doing for school and of course, you get my own opinions and thoughts – full force!

To begin, I would suggest you take a look at my Blog post in January (2011 if you are reading this years from now – hello future people!). Go ahead – I’ll still be here when you get back…

So, you read my comments about my frustration with the lack of outcomes information coming to me from organizations soliciting me for donations. Well, those winds of change came quickly and a partner in funding is looking for outcomes, as is my own Board. My CEO, who gave me the name “The Evaluation Evangelist”, has pointed out to me a few times that “a prophet is rarely heard in his own land”, and my previous warnings about nonprofits and foundations needing to attend to outcomes (versus outputs and other processes) went unheeded. And as with all crises, I think we are at the beginning of a change.

Before I go further, I should tell you that while I believe that we should always consider outcomes of programs, projects, advocacy, and whatnot – there is a time and place for evaluating said outcomes. This is tied to the questions the stakeholders have for the evaluation and what is possible to measure, given the theory of change of the program. Today, the “top” funding stakeholders are asking for outcomes and unfortunately, because of their attention, nonprofits are going to need to react. Why do I say, “unfortunately”? - Because the interest in programmatic outcomes didn’t originate in the nonprofits delivering the programs.

Granted, I have access to a small number of nonprofits, but in their study of nonprofits, Reed and Morariu (www.innonet.org) found that more than 75% of the nonprofits spent less than 5% of their budget evaluating their programs – 1 in 8 spent no money at all. Additionally, funders were characterized as the “highest priority audience for evaluation” and, surprise – outcomes and impact evaluation were rated as the highest priority. So, my experience with nonprofits, while small, does seem to echo the broader population of nonprofits.

So, if things are as they have always been – otherwise we wouldn’t have the results of the Innovation Network’s State of Evaluation 2010 – why would I be concerned? Sure, I have been an advocate for evaluation use, and just because I’ve been advocating for it (bigger names than mine have for a lot longer), that alone shouldn’t effect change. In fact, one could argue that I should be pleased – interest in evaluation is increasing in the funding community. Except, there is little education for the funding community around evaluation. There is little use by the funding community of evaluation. And the expectations that are coming out of the funding community are the equivalent of taking an older car that has never gone faster than 20 miles per hour and slamming on the accelerator to go 80 miles per hour (for those of us that use metric, you can substitute KPH and still see the analogy). Nonprofits that had at best conducted some pre-test/post-test analyses of knowledge change in participants in their program (more likely did a satisfaction survey and counted participants) are now being required to engage in significantly more sophisticated evaluations (ranging from interrupted time series designs to randomized controlled trials). The level of knowledge required to conduct these types of studies with the implied rigor associated with them (I say implied if only because I can find a comparison group for anything – it just might not be appropriate) simply does not reside in most nonprofits. They haven’t been trained and they certainly don’t have the experience.
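For context, the kind of pre-test/post-test analysis I describe as the current ceiling for many nonprofits can be as simple as the following sketch. The scores are invented, and the paired t-test shown is just one common way such data gets analyzed, not a prescription.

```python
from scipy import stats

# Invented knowledge-test scores for the same ten participants, before and after a workshop.
pre_scores = [52, 60, 55, 48, 70, 63, 58, 49, 66, 61]
post_scores = [61, 72, 63, 55, 74, 70, 69, 58, 71, 68]

# A paired t-test asks whether the average within-person change differs from zero.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"average gain: {mean_change:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Moving from something like this to an interrupted time series or a randomized trial is a very different order of effort – which is exactly the gap those funding expectations gloss over.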

The funding community’s response is to offer, and in some cases require, an external contractor to support the evaluation. This could lead me to talk about difficulties in finding qualified evaluators, but we won’t talk about that in this post. It is an issue. However, what occurs with the involvement of an external evaluator? They do the work to support the funder’s objectives and, after the funding for the project ends, they tend to leave too. There is also an issue around funding the evaluation at the level of rigor required – that too will come in another post. But, the message I want to leave you with here is that engagement of an external evaluator does little to increase the buy-in, much less the capacity, for the organization to engage in internal evaluation. The “firewall” preventing bias of an internal evaluator (e.g. organizational pressure to make the organization look good), while certainly improving the perception of the funder that the evaluation is more rigorous, does little to help the nonprofit other than to aid them in maintaining the current cash flow. [Incidentally, I’ll address the internal versus external evaluator conflict in a later post as well. I think this is something we can all benefit from exploring.]

So – what am I advocating for? Let’s not take that older car on the highway just yet. Let’s listen a bit more closely to evaluation thought leaders like David Fetterman and consider what we can do to improve the capacity of organizations to do their own evaluations. Let’s show them how attending to outcomes might help them improve their organization and the services they provide to their participants. Perhaps we should think about evaluation from the standpoint of use and then apply the rigor that is reasonable and possible for the organization. Bringing in an external evaluator who applies techniques and methods beyond the reach of the organization results in something mostly for the funder, not for the nonprofit. At best, the nonprofit does learn something about the one program, but beyond that – nothing. To them, it could almost be considered a research study versus an evaluation. Let’s partner with the nonprofits and get them up to the speed we want them to reach, with careful consideration and deliberation, versus just slamming on the accelerator.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist