Sunday, December 26, 2010

You Give Evaluation a Bad Name

Okay, for a moment I was going to go with the theme again of linking a song to my blog. I'll spare you the extended Bon Jovi references; for those of you who aren't 80s music types, the song is "You Give Love a Bad Name" by Bon Jovi.

However, the title has import in this case, as evaluation has been given a "bad name" several times over the years. Now there is a new "threat" to evaluation: the idea that evaluation somehow causes foundations to become risk averse, or to put it more plainly, the idea that if you evaluate, you are less likely to fund higher-risk programming. I won't name names, but I've had the opportunity to attend plenary sessions at national conferences where speakers were not merely implying but outright saying that funders who engage in evaluation stifle their own ability to take risk.

I’ll let that sink in for a moment… No worries, I’ve already written this, so take as much time as you like….

People are paying to hear that evaluation stifles risk and thus innovation. No, I'm not putting words in anyone's mouth here; there are witnesses who will agree with me and who, unfortunately, further agree that evaluation does in fact "prevent" funders from trying risky things.

Well – since I’m writing about this topic, you know I’ve got something to say…

If by risk you mean reckless funding of programming with no clue what will happen, or why, or to whom, or for what purpose… yeah, I guess evaluation stifles that. If you mean that blindfolding yourself, plugging your ears, and starting down a black diamond ski slope (nasty things, black diamonds; difficult to navigate unless you are an expert) on a snowboard with loose bindings while hoping for the best is the type of risk foundations should be taking, then by all means, evaluation is going to hamper it.

However, there is reckless risk and there is informed risk. I actually work with programs that engage in higher-risk (as in non-evidence-based) programming, but that also have evaluations in place to record the experience of the program as it negotiates the difficult slope of staffing, appropriate development and application of programming, and yes, even impact on the participants. Evaluation provides the eyes, ears, and to some extent the bindings that can make that risky descent down the slippery, bumpy slope of programming a bit safer.

Let's be honest for a moment: funding anything is sort of a gamble. The riskier the gamble, the more informed the gambler wants to be. Take the game of roulette. It is one of the worst sucker bets you can place in a casino. As a gambler, your only control is where you place your chip. If the wheel is a bit unbalanced, you might catch a pattern; otherwise, it really is random chance. You don't see a large crowd around the wheel in a casino. Why? Because it is nearly impossible to get good information as to what number or color might come up next. Now gaze over at the poker tables: there's the crowd. Why? There is a lot of information available to gamblers there that enables them to make wiser bets. Granted, there is still the element of random chance, but by using the data available to them in the moment (evaluation!), they are able to make more informed decisions.
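To put a rough number on that "sucker bet" claim, here is a minimal sketch, assuming an American double-zero wheel and the standard 35-to-1 payout on a single-number bet (standard casino rules, not anything from this post):

```python
# Expected value of a $1 single-number bet on an American roulette wheel.
# Assumptions (standard casino rules, not from the post): 38 pockets
# (1-36 plus 0 and 00), each equally likely, and a 35-to-1 payout.

POCKETS = 38
PAYOUT = 35  # dollars won per $1 staked on a winning number

p_win = 1 / POCKETS
expected_value = p_win * PAYOUT - (1 - p_win)  # win $35, or lose the $1 stake

print(f"Expected value per $1 bet: ${expected_value:.4f}")
# Prints roughly -$0.0526, i.e., about a 5.3% house edge.
```

No amount of watching the wheel improves that expected value. The poker player, by contrast, can fold a bad hand mid-game, which is exactly the kind of in-flight course correction evaluation offers.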

So why are folks clamoring for foundations to head to the roulette table rather than the poker table? Why are they implying that it would be a good idea for foundations to go to the top of that dangerous hill and just toss themselves off without their senses? Because, in fact, they are not suggesting that at all. To them, evaluation is about demonstrating success. High-risk investments often fail to pay out. Yet without those high-risk investments, programming will never reach certain groups of people, nor will new innovations occur. Thomas Edison is often quoted as saying, "I have not failed, not once. I have discovered thousands of ways that don't work." After those many thousands of discoveries, he hit upon the electric filament that is still the primary light source for many of us. It was the learning from those discoveries that led him to greatness.

And so – as always, it is late in the blog that I get to the point…

The people arguing that evaluation is a barrier to innovation see evaluation only as a way to determine whether a program is successful, and thus as the reason a program gets cut by a foundation that is looking for success. They do not know or realize that evaluation can be used to monitor the program internally, by the folks implementing it, to "discover the ways that don't work" and steer the program toward things that seem to be working. As such, their reaction is to blame the evaluation, designed as it was only to check for success or failure at some endpoint, for the eventual cut of funding to the program.

Let me share an alternative viewpoint. Instead of conducting an evaluation that only looks for impact at the end of a program, foundations should support (and yes, fund) evaluations of high-risk programs that focus on learning about the processes of the program and on short-term outcomes that can tell program staff whether they are moving in the right direction. Innovation rarely occurs by accident; rather, it is often a stepwise process that incorporates a great deal of trial and error. The unit of these tests should be small enough that they can be conducted within the program's timeframe, not after several years of work, enabling the gambler, or skier, or funder to see the patterns and the terrain ahead and take informed risks. Evaluation isn't the bane of high-risk programming; it can and does support it, giving the funder and program implementer an opportunity to learn as they go and fostering great innovation.
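To make the "small units of tests" idea concrete, here is a purely illustrative sketch of that rapid learning loop; the program variants, the outcome measure, and every number are invented for this example, not drawn from any real program:

```python
import random

# Hypothetical rapid-cycle learning loop: run a small trial of each program
# variant, measure a short-term outcome, and carry the best forward.

VARIANTS = ["outreach by mail", "outreach by phone", "peer referral"]

def short_term_outcome(variant: str) -> float:
    """Stand-in for a real short-term measure (e.g., an enrollment rate)."""
    baseline = {"outreach by mail": 0.20,
                "outreach by phone": 0.35,
                "peer referral": 0.50}
    return baseline[variant] + random.uniform(-0.1, 0.1)  # noisy observation

best_variant, best_score = None, float("-inf")
for variant in VARIANTS:
    # A "unit of test" small enough to finish within the program's timeframe.
    score = sum(short_term_outcome(variant) for _ in range(10)) / 10
    print(f"{variant}: average short-term outcome {score:.2f}")
    if score > best_score:
        best_variant, best_score = variant, score

# Discover the ways that don't work, then steer toward what does.
print(f"Next cycle builds on: {best_variant}")
```

The point isn't the code, of course; it's that each cycle is short enough to inform the next decision rather than arriving after the funding decision has already been made.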

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist
