Sunday, December 26, 2010

You Give Evaluation a Bad Name

Ok, for a moment I was going to go with the theme again of linking a song to my blog. I’ll spare you the extended Bon Jovi references – for those of you who aren’t 80s music types, the title plays on “You Give Love a Bad Name” by Bon Jovi.

However, the title has import in this case, as evaluation has been given a “bad name” several times over the years. Now there is a new “threat” to evaluation – specifically, the idea that evaluation somehow causes foundations to become risk averse, or to put it more plainly, the idea that if you evaluate, you are less likely to fund higher risk programming. I won’t name names here, but I’ve had the opportunity to attend plenary sessions at national conferences where the speakers were not merely implying, but outright saying, that funders who engage in evaluation are stifling their own ability to take risk.

I’ll let that sink in for a moment… No worries, I’ve already written this, so take as much time as you like….

People are paying to hear that evaluation stifles risk and thus innovation. No, I’m not putting words in anyone’s mouth here; there are witnesses who will back me up and who, unfortunately, also agree that evaluation does in fact “prevent” funders from trying risky things.

Well – since I’m writing about this topic, you know I’ve got something to say…

If by risk, you mean reckless funding of programming with no clue what will happen or why or to whom or for what purpose… yeah, I guess evaluation stifles that. If you mean that blindfolding yourself, plugging your ears, and starting down a black diamond ski slope (nasty things, black diamonds – difficult to navigate unless you are an expert) on a snowboard with loose bindings and hoping for the best is the type of risk foundations should be taking, then by all means, evaluation is going to hamper it.

However, there is reckless risk and there is informed risk. I actually work with programs that engage in higher risk (as in non-evidence-based) programming, but that also have evaluations in place to record the experience of the program as it negotiates the difficult slope of staffing, appropriate development and application of programming, and yes, even impact on the participants. Evaluation is the eyes, ears, and to some extent the bindings that can make that risky descent down the slippery, bumpy slope of programming a bit safer.

Let’s be honest for a moment: funding anything is sort of a gamble. The riskier the gamble, the more informed the gambler wants to be. Take the game of roulette for a moment. It is one of the worst sucker bets you can place in a casino. As a gambler, your only control is where you place your chip. If the wheel is a bit unbalanced, you might catch a pattern; otherwise, it really is random chance. You don’t see a large crowd around the wheel in a casino – why? Because it is nearly impossible to get good information about what number or color might come up next. Now glance over at the poker tables – there’s the crowd. Why? There is a lot of information available to the gamblers to enable them to make wiser bets. Granted, there is still the element of random chance, but by using the data available to them in the moment (evaluation!), they are able to make more informed decisions.
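For the curious, a quick back-of-the-envelope calculation shows just how bad a bet that wheel is. Assuming an American double-zero wheel (38 pockets) and the standard 35-to-1 payout on a single number – my assumption, since the post doesn’t specify a wheel – the expected value of a one-dollar bet is:

$$E[\text{single-number bet}] = \frac{1}{38}(+35) + \frac{37}{38}(-1) = -\frac{2}{38} \approx -0.053$$

About a nickel lost on every dollar, every spin, and no amount of watching the wheel changes it. At the poker table, by contrast, the information in front of you can actually improve your decisions – which is the whole point of the analogy.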

So – why are folks clamoring for foundations to head to the roulette table versus the poker table? Why are they implying that it would be a good idea for foundations to go to the top of that dangerous hill and just toss themselves off without their senses? It is because, in fact, they are not suggesting that at all. To them, evaluation is about demonstrating success. High risk investments often fail to pay out. Yet, without those high risk investments, programming will never reach certain groups of people, nor will new innovations occur. Thomas Edison is often quoted as saying, “I have not failed, not once. I have discovered thousands of ways that don’t work.” Yet, after those many thousands of discoveries, he hit upon the electric filament that is still the primary light source for many of us. But it was the learning from those discoveries that led him to greatness.

And so – as always, it is late in the blog that I get to the point…

The people arguing that evaluation is a barrier to innovation see evaluation only as a way to determine whether a program is successful – and, as such, as the reason a foundation looking for success cuts a program. They do not realize that evaluation can be used to monitor the program internally – as in, by the folks implementing the program – to “discover the ways that don’t work” and change the course of the program towards things that seem to be working. So their reaction is to blame the evaluation, which is designed only to look for success or failure at some point, for the eventual cut of funding to the program.

Let me share an alternative viewpoint. Instead of conducting an evaluation that only looks for impact at the end of a program, foundations should support (and yes, fund) evaluations of high risk programs that focus on learning about the processes of the program and the short term outcomes that can tell program staff whether they are moving in the right direction. Innovation rarely occurs by accident; rather, it is often a stepwise process that incorporates a great deal of trial and error. The unit of these tests should be small enough that they can be conducted within the program’s timeframe, not after several years of work – enabling the gambler or skier or funder to see the patterns and the terrain ahead and to take informed risks. Evaluation isn’t the bane of high risk programming; it can and does support it, giving the funder and the program implementer an opportunity to learn as they go and fostering great innovation.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, December 1, 2010

This Is My Beautiful House!

Twas the night before Thanksgiving and all through the house, the Evangelist Family was toiling, even the mouse! We were preparing to have family over for a Thanksgiving Feast: the house needed to be spruced up, floors needed vacuuming, and most importantly, food was being prepared for the next day. About five minutes into laying out the necessary ingredients for the stuffing, I discovered that we were a half-pound short of mushrooms. So off to the store I went, and this latest stream of consciousness was born.

I have Sirius Radio and was listening to 80s on 8. Now, perhaps showing my age here, I attended high school in the 1980s and my formative driving years were during that time. In the region I grew up in, winter can be marked by heavy fog – a fog very similar to the one I was driving through as I listened to the music of my youth. For a brief time, driving from my house to the grocery store, I experienced a vivid sense of déjà vu (ok, it really wasn’t déjà vu, but you get the idea) mixed with happy memories. On the way back from the grocery store, still in that same fog, I found myself reflecting on my journey to where I now live and the profession I now call my own.

Children of the 80s might have already caught the title of today’s blog – a reference to a Talking Heads song, Once in a Lifetime. If you had talked to that 16-year-old kid tooling around the neighborhood in the fog in the 1968 Buick Skylark (yes, I knew how to roll back then), he would have scoffed at the idea that over 25 years later he would be Director of Evaluation for a Health Foundation and that he would be so passionate about evaluation that his own CEO would brand him the Evaluation Evangelist. Back then I wanted to fly airplanes. Dreams of tooling around above the fog and clouds filled that kid’s head then, versus today’s dreams of tooling around various clusters of data and information to pull organizations out of the fog of programmatic and organizational complexity.

So, to borrow another line from the Talking Heads – Well… How did I get here?

I think my story is not all that unique in the world of evaluation. While there is a current generation now waking up in college and deciding that they want to pursue master’s and doctoral degrees in evaluation, back then my generation was waking up and choosing to study social sciences and education. For me, it was “worse”: I still wanted to fly when I went to college. It was a series of events in my life, all tied to the desire to eat and to pay for my education, that eventually led me to evaluation. My junior year of college, or as my family refers to it, the third year of my undergraduate career, I found myself without funding for school and with a recognition that flying wasn’t in my future. For the next three years, I experimented with different majors and held different jobs. My first flirtation with psychology, the social science that would eventually claim the dubious honor of being my bachelor’s degree, came as a result of watching two kids interact with each other when I was a childcare director. I felt this need to solve the puzzle of their behavior. So I was drawn to a class, which led to many more classes and the degree. Flying had been replaced by puzzle solving (with people’s behavior in mind) – something I had enjoyed most of my life. But it all clicked months after my advisor, perhaps seeing how gaunt I had become living off ramen noodles and whatever was on sale at the supermarket, asked me to work with someone on an evaluation for the state of Missouri. I apprenticed for those two years, bringing my knowledge of research methods and statistics to the table, and learned about politics, about working in the world versus a lab, about dissemination of information, and that my work could make a positive difference in people’s lives. That project and the one following it hooked me. Here was an opportunity to solve really complex puzzles and make the world a better place. It was much more fun than flying or just trying to puzzle out human behavior.

Which brings me to my return to graduate school this year – oh, did I forget to tell you I’m back in school again? I’m working towards a doctorate in evaluation and research methods at Claremont Graduate University. It is a great program – look it up if you are interested, or email me and I’ll happily tell you about my experience. Anyway… These past few months have had me thinking about what describes a good evaluator and whether I’m really a good evaluator or not. What I have learned is that it is a good thing I’m back in school. There are holes in my education. There have been advances in technique and statistics since I took those courses in the early 1990s. I’m being challenged to think beyond the ruts I formed as a practitioner. But most importantly, I’ve had the opportunity to talk to others about my ideas and hear their own thoughts. I’ve started to surround myself with individuals who share some common values:

• They are interested not only in not harming people with their work, but in actually improving their lives.
• They seem to be puzzle solvers like me – although some have “interesting” and different approaches to the solutions of their puzzles.
• They are honest with each other and, while sometimes brutal in their observations, intentional in their desire to help one another.
• They are in a program to improve their competence as evaluators.

In other words, they embody many of the values embedded in the American Evaluation Association’s Guiding Principles. It is not adherence to these principles that makes for a good evaluator; rather, the principles describe the outward signs of the internalization of good evaluation values.

Letting the days go by

Back to my journey into the world of evaluation: much like others, I found it by accident. It wasn’t something that I grew up wanting to do, but I would argue that it was something I was born to do. [Whoa! Did I really say that?] I’ve been a member of AEA since the late 90s, perhaps not as long as others, but I can tell you that each conference I’ve attended has felt like a homecoming. I knew that evaluation was something I wanted to do as I worked on my first evaluation. The few opportunities I had to meet and talk with other professional evaluators were always more comfortable than time with any other group of people. There has always been that sense of a good fit. Returning to graduate school to study and explore my own thoughts about evaluation has been a homecoming of its own.

This is my beautiful house!

Now that you’ve read my little affirmation as to why I’m happy to be where I am, I’ll tell you that I still haven’t landed on the notion that I’m a good evaluator. I think I embody the values. I think I do good work. But I also know that there is more to learn.

Why have I shared all this with you? Well… Have you thought about becoming an evaluator? Clearly you are reading this for some reason. What draws you towards evaluation? Are you one of us puzzle solvers? Do you want to help others?

If you are an evaluator, perhaps my tour down memory lane will remind you how you got into the profession and why you stayed.

For both groups, I would be interested in hearing your story. What interests you and draws you to evaluation? And – well… How did you get here?

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, November 17, 2010

My Return

I think it has been over a year since my last post here. Let’s just say I’m not a prolific blogger and leave it at that, ok? What I have learned is that I blog when I think of something important that I would like to discuss with you, dear reader.

So, what got me to return here after 16 months of just Twitter comments? The American Evaluation Association’s (AEA) annual meeting/conference/party. It is there that you will find the majority of the great thinkers in the field, their students, and the practitioners who attempt to make sense of the world. There is a level of intellectual discourse that differs from any other conference I’ve attended. It is a homecoming of sorts for most of us who struggle and contemplate how best to evaluate and inform the various communities and organizations we serve.

It was there, last week, that I had a moment of crisis. Oh, it was coming – it had been building like a slowly evolving migraine. And on Wednesday, in a session I was attending, it exploded across my poor brain: evaluators proposing the notion that evaluation should inform decision making in as near to real time as possible. At once I knew that I was in trouble…

You see, dear reader, my roots are in applied social psychology – empiricism reigned supreme in my thoughts around anything having to do with research methods. Like many evaluators, I found the profession by way of another field, and my past clearly colors my future. However, the shade of tint has been changing over time. If you have read my previous blog posts, you probably know that I’ve also been colored by an experience in quality management and a need to help programming succeed. That flavor has affected how I go about my practice as an evaluator as well.

The two views competed with each other a bit and one could argue I was a “mixed method” evaluator in that I craved the “certainty” (yes, I know it really isn’t certain) of empiricism and the impact on program and organizational improvement that more interactive forms of evaluation can provide. I would flip back and forth and to be honest, I still oscillate between the two like a strange quark, vibrating between these “approaches”. But, it wasn’t until my moment of panic today that I noticed how quickly I quivered.

And so, dear reader, I come to you in confusion and, admittedly, some fear. You see, in my role as Director of Evaluation for a foundation, I want it all. I’m sure my fellow staff members want it all. My Board wants it all. And I think my grantees want it all too. We want to know what “conclusively” works so that we can generalize these learnings to other programs and projects we fund. We want the evaluation “results” (outcomes) to inform our future grantmaking. We want good programmatic ideas to spread. The empiricist in me argues that the evaluator needs to be the critical friend who watches the program get dressed, go out on its date, and succeed or fail, all without providing any advice.

But in our desire to see the programs we fund succeed, we also want to be the critical friend who, after seeing your outfit, suggests that you change it before going out; who observes how the date is going and offers ideas for different topics of “small talk”; or who notices that the place doesn’t work for the person you are with and suggests somewhere else to go. We want that date to succeed. We want that program to succeed. But we also want to know at the end of the date whether the whole package works.

Peter York of TCC Group made an interesting observation in a session at AEA. It was in reference to a different, though somewhat related, issue. I am curious to hear more from him on his thoughts, but it got me thinking: what if we broke the program, or the date, into smaller parts instead of evaluating the whole thing? This approach allows for more interventional evaluation (stopping you from continuing to talk about your last significant other and suggesting other topics to discuss – like the weather) while maintaining some of the possible strengths of empirical rigor. By chunking the process of the program into smaller parts, we get a more rapid cycle of reporting and an opportunity to improve the program along the way.

This only gets us so far. We have to have evaluation questions that are focused only on the components, and those questions have to be time-specific. This might actually be good from a generalizability standpoint, as few programs are copied lock, stock, and barrel. Rather, based upon the context of the environment and the resources available, components of the program are implemented.

There is another issue as well – specifically, the “intervention” of the evaluation (assisting with identifying issues with the program and providing suggestions for changes). One great argument against this is that the program has been “tainted” by the process of evaluation and is no longer pure. Here’s where I’ve landed on this topic this morning:

• Programs change over time with or without formal evaluation. They change for various reasons – one being that someone has made an observation that things aren’t working as they would expect. Why is it so wrong for that someone to be better informed by a system that has more rigor?

• As I mentioned above, programs change over time. This is a problem faced by longer-term empirical designs, and frankly it is often ignored in these discussions. Live programs are not like the labs where much of social science is conducted – things happen.


Huey Chen made an interesting observation in a presentation this past week at AEA. At the time, he was discussing the idea that randomized controlled trials (RCTs), while appropriate at certain points in evaluation practice, are better conducted when issues of efficacy and validity have been addressed in previous evaluations. Taking his comments further (and of course without his permission at this point), I would argue that evaluation focused on program generalizability should only be conducted after a meta-analysis (in the broadest sense, not the statistical method) indicates that, in fact, the whole program might work across multiple domains and contexts.

So – where does this all leave me in my crisis? I should tell you that I’m feeling better – much better. You see, it really comes down to the evaluation question and whether that question is appropriate. The appropriateness of the question is largely tied to timing and to the results of previous evaluations. If we are talking about a new program, it is important to conduct interventional evaluation – we are collaborating in building the best program possible. For more mature designs that have been implemented in a few places, assessment of the programmatic model makes sense and a more empirical approach to evaluation is appropriate. It is all about timing and maturity.

Funders still want it all, and so do I. How do we give a funder that is only interested in starting new programs the opportunity to say that their program idea should be replicated, yet allow for interventional evaluation as well? I have three criteria here:

• Fund multiple programs (and no, not just 5).

• Fund internal interventional evaluations for each program.

• Fund a separate initiative-level evaluation that can look across all the programs, their contexts, and the organizational interventions (including the interventions of the internal evaluations).

In this case, there is a different focus on, and viewpoint of, programming. For as long as I’ve been an evaluator, there has been this constant complaint that organizations do not internalize evaluation – that they do not integrate evaluation into programming. Here is the opportunity to build that framework. Evaluators complain that evaluation is viewed as separate from programming, yet the whole empiricist view of evaluation would place evaluation outside the program – an observer looking through the glass, watching, to later remark on what happened and perhaps why. “Evaluation” is being conducted by amateurs on a daily basis to run their programs – why don’t we empower them to do it right? Empower them to integrate better evaluative practices into their programming? And then recognize that evaluation is an integral part of programming, seeing it as an operational activity that affects programming in much the same manner as staffing, networks of support, and environment – which we already consider appropriate evaluands?

Michael Scriven talks about it being time for a revolution in evaluation. Perhaps it is time to drive in the spike that connects the rails he and David Fetterman have been laying. Perhaps it is time to agree that interventional evaluation and the more empirical forms of evaluation can coexist, much as we have found that qualitative and quantitative methods can coexist and, in fact, enhance one another through mixed methods approaches to evaluation.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,
Charles Gasper
The Evaluation Evangelist