Monday, August 20, 2012

Complexity - Excuse or Misunderstanding?

Is complexity an excuse or evidence of a lack of understanding? Friends, have you ever asked someone to explain why something happened, or perhaps why they are feeling a certain way, and gotten the response, "it's just too complex to explain"? I wish I could see the wry looks on at least some of your faces, or perhaps the silent nods. One of the major responsibilities of an evaluator is to attempt to understand the program or organization they are evaluating. As most of you know, especially if you have read my past blog posts, I adopt a theory-based framework for my evaluative work. That means I'm constantly asking not only about the connections between activities and outcomes, but also why the team believes those relationships exist. Oftentimes, I get a response very much like the quote above: "our work is just too complex to consider, much less model!" Yet the core concept of theory-based evaluation is the idea that we can get to the underlying connections and reasons.

Much like the Kübler-Ross stages of grief (http://en.wikipedia.org/wiki/Kübler-Ross_model), many of the teams I've worked with have gone through stages that start with "it is too complex" and end with "yes, that is what we do and why we do it". Once we get people to actually agree to engage with us, their models reflect the complexity seen in models such as this.

[Image: the Ptolemaic (geocentric) model of the solar system]


If you remember your high school physics class, or paid attention during an astronomy class, you will recognize that the model above, found at http://farside.ph.utexas.edu/teaching/301/lectures/node151.html, is the Ptolemaic model. To make the model work with Earth at the center, each planet must orbit around a central point while that central point orbits the Earth in a direction opposite to Earth's rotation. The machine necessary to model this looks something like this, found here (http://remame.de/wp-content/uploads/2010/03/astrolabe_2.jpg):

[Image: geared mechanism built to model the Ptolemaic system]

Note the complexity of the gearing and the process needed to model that complexity. However, there is another step: moving to a simpler model. That requires the team to take a step back, abandon a geocentric viewpoint of their own program or organization, and try to look at everything a little bit differently. In the case of astronomy, opening up to the notion that the Sun is the center of gravity for our local solar system resulted in a simpler, less complex model, found here (http://biology.clc.uc.edu/fankhauser/classes/Bio_101/heliocentric_model.jpg):

[Image: the heliocentric model of the solar system]


and results in simpler mechanics, as seen here (http://www.unm.edu/~physics/demo/html_demo_pages/8b1010.jpg):

[Image: a simple orrery demonstrating heliocentric mechanics]

The simpler model allows for a more accurate representation of what is actually happening, and then allows for corrections, such as accounting for the fact that the planets do not orbit the sun in perfect circles. In organizations and programs, similar moments of clarity allow the team to test deeper assumptions and improve their associated projects.

Now, let's be honest: organizations and their programs, much like true orbital mechanics, aren't simple. There are layers of complexity. However, there is true complexity, and there is complexity driven by poor assumptions or an inability to stop and look at things objectively. The role of the evaluator is to help break down these viewpoints and help the team see through the complexity they have invented due to their preconceived notions, so they can see the true underlying mechanics of their work and its outcomes. The process isn't easy, and in some cases I've found that the work I do is more like that of a therapist than a researcher. There can be displays of frustration and anger as the team works its way through understanding their organization or program. And much like some therapy sessions, the team can pretend there is agreement among them when there isn't, unifying against the evaluator to avoid the pain of the experience and/or the possible pain of discovering that their world view isn't as clear as they would like. I will write more about process another day, but suffice it to say, opening people to other views can be rather difficult work.

So, back to the original question: is complexity an excuse or evidence of a lack of understanding? I've often found it can be both. With that in mind, the wise evaluator, interested in understanding the theories of an organization or program, will continue to push the team to "simplify" their model of their theory. It is in that simplification that real and difficult discussions occur, providing insights into what the organization or program is trying to accomplish and how.

Also, please note that at no point did I say that complexity isn't a part of everything we do - it most certainly is. However, experience indicates that when we think about what we do and how we do it, our mental models are significantly more complex than reality. Further, our perceptions of what we do and why are often colored by how important we want to feel and how much we desire others to understand how difficult it is to be us. To those of you who fight to help teams tease out the true complexity from the self-generated complexity… To those of you who struggle to bring clarity to a complex world… Thank you!

As always, I'm open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, January 5, 2012

If Your Friends Were Jumping off a Bridge, Would You Do It Too?

Ok, show of hands: how many of you had parents who asked the title of this blog post, or a similar question, when you were a teenager? I'm looking forward, a number of years from now, to uttering those words out loud myself. Truth be told, I have had several opportunities in past years to say something similar when working with organizations around evaluation.

It doesn't take much to recognize that I'm a strong proponent of evaluation. Would a guy who didn't think evaluation was important call himself The Evaluation Evangelist? However, I am also a strong proponent of using the information gleaned from an evaluation AND very much against wasting resources on creating information that will not be used.

I'm going to ask you to raise your hands again here… How many of you have been asked to design an evaluation, only to ask your client those fateful words, "what would you like to learn?", and get a blank look, confusion, or something to the effect of, "we don't know, we were hoping you could tell us"? Trying to get more information, you might follow up with a question like, "why do you want an evaluation done?" and get the response "the funder wants one", "we are supposed to do an evaluation", or the like. More often than not, I find myself on the receiving end of one of these responses.

As consultants, do you find yourselves trying to design an evaluation for a client who doesn't know what they want or why they are hiring you to do the evaluation? As program or organizational leaders, are you hiring evaluators without knowing what you plan to get out of the evaluation? My guess is that at least some of you are nodding your heads, or at least remembering a time when one of these was true for you.

So, why is evaluation so popular these days? As people interested in the promotion of evaluation, should we even care why evaluation is popular, or should we just enjoy the fact that interest is increasing? As an evangelist, shouldn't I simply be content that people are now asking for evaluation, and thus that I'm employed to help them understand what evaluation can do for them? To this, I must answer an emphatic NO!

Evaluations done just because a funder requires it or because the leadership has heard or read somewhere that it is a good thing to do (or worse, because it just is something one must do) will end up not being used. At best, the contracted or internally hired evaluator might be able to work with the organization to identify evaluation questions – but in the end, the organization needs to be the one driving the questions.

Metaphorically, think of the joke about the drunk who has lost a quarter and is looking for it under a streetlight. Along comes a guy who asks the drunk what he is looking for, and the drunk tells him about the quarter. The guy asks where he lost the quarter, and the drunk points off in a direction and says, "over there". When asked why he is then looking under the streetlight, the drunk says, "the light is better over here." I liken this to the organization that asks for an evaluation without guidance. In this case, the drunk (the organization) wants help finding something, and the guy (the evaluator) winds up having to ask all sorts of questions that may eventually uncover an issue worth addressing.

But it can be, and often is, worse… Because these organizations often haven't formulated evaluation questions, it is as if the drunk is searching for something but doesn't know what it is. He may never have lost a quarter in the first place. The helpful evaluator might find a different quarter, a dime, a stick of gum, and a rusty bolt on that sidewalk as well. All of these might be useful to the client in some way, but since he doesn't know what he is missing (if anything), he may not value the findings. As a result, the evaluation findings go unused.

Now, some may argue that there are situations where having evaluation questions on the front end isn’t a good thing. Perhaps those situations exist, but even then, I would hope that there is some reason for engagement in evaluation other than just because it is done or others are doing it.

So, dear reader, I leave you with a thought for the next time you consider an evaluation (either requesting one or supporting one). Ask yourself why you are on the bridge and why you are considering taking the leap. Is it in support of well-thought-out evaluation questions, or because everyone else is doing it?

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist