Wednesday, July 13, 2011

Educating This Generation’s Evaluators

Some of you may know that I am on an incredible learning journey called "Graduate School". I am nearly a year into this experience and can honestly say that my own thoughts around theory and practice have been influenced this past year. Clearly, the work of Claremont Graduate University, Western Michigan University, and a few other schools is bringing a focus and commitment to professional evaluation not otherwise found. They are sending master's- and doctoral-prepared professionals into the world to engage nearly anyone they can in evaluation. If you want to get a sense of how important I think that is, just skim through the titles of my previous postings over the years. I think there is room for more of these institutions around the globe, as my own experience of being the ultimate commuter student pursuing my PhD has taught me. You see, there wasn't a school nearby with a strong evaluation-focused program in which I could expand my own knowledge and expertise. There wasn't a community of thinkers locally available. So, first with Claremont's Certificate Program and later with my application and acceptance to the Graduate Program – I found my community. However, I think the stars aligned and I was lucky. Claremont had just started the Certificate Program (I was in the first cohort), and if it weren't for the vision of the leadership of Claremont's Psychology Department with Stewart Donaldson at the helm, I would be stuck, wishing.

As you can probably guess, I have an idea… Well, a few anyway.
1) Online programs have a bad reputation in the academic world. There is a viewpoint that they are not as rigorous as residence programs. This viewpoint needs to change. Online participation in residence programs is now possible – my experience is a case in point. In fact, there are times that I believe I get a superior experience to the resident in the classroom, having access to a teaching assistant with whom I can discuss thoughts and ideas that occur to me during class without disrupting it by voicing them aloud. Granted, my experience is a bit different from other online experiences – perhaps in the area of requirements. But, with a bit of effort, the technology currently exists to maintain those requirements – even when the student is thousands of miles away from the campus. I suggest that the schools that educate and train professional evaluators examine this idea more closely and experiment.

2) Workshops at conferences, institutes, and the like are good entrées to topics, techniques, and theories of evaluation – but that about covers it. The onus is on the "student" to seek out additional venues of learning, usually books or websites. AEA has done some fantastic things to offer more information to members in the form of AEA365, its LinkedIn group, EVALTALK, and others. EVALTALK was my link with the evaluation community, a place to ask questions from time to time, and LinkedIn has assumed some of that role as well. AEA365 provides great tips and links to useful ideas – but there is still something missing: an organized, progressive training opportunity for evaluation professionals.

On a daily basis, I work with both amateur and professional evaluators. Frankly, that differentiation is unfair. I work with folks along a spectrum of evaluation knowledge and skill. I engage academics who have poor evaluation knowledge and skill as well as academics who are highly knowledgeable in this arena. [At some point, I will write more about the differentiation between content experts and evaluation experts – something a good number of nonprofits and funders misunderstand.] I also engage individuals with bachelor's and master's degrees in fields not traditionally associated with evaluation or research who are highly knowledgeable, and yes, there are those with little knowledge in this category as well. Sending all of these people to a workshop to learn aspects of evaluation is not going to go far in improving their abilities. They need more support than that.

My own work in this area is leading me to a coaching model for engaging and training those lower on the evaluation knowledge continuum. In such a model, technical assistance in the more traditional forms of workshops and one-on-one training still occurs – but the "instructor" or "coach" continues to have contact with the "students", providing continuing education as needed. Like a player on a coached team, the "student" receives the training and then is allowed to "play" (conduct appropriate evaluation work at their level) with additional mentoring and advice from the coach. Occasionally, the "student" returns for training (again, envision a team practice) for additional skills/knowledge development. We are testing this in a few projects I'm associated with, and if you happen to attend this year's AEA conference in Anaheim (http://www.eval.org/eval2011/default.asp), you are most welcome to catch a presentation sharing our experiences with this in one organization and where our theory of capacity building has evolved.

However… This still leaves a large gap in the education of evaluators – specifically the group I would call semi-professionals. These are the people in the middle of the continuum who have perhaps a master's degree or even a strong research-focused bachelor's degree. They often have been practicing evaluation for a shorter period of time and, if they are lucky, work in an organization with a more experienced and/or better-trained evaluator. But often they are not so lucky – and they are looking for additional educational opportunities. They may sign up for and attend workshops on topics, but as mentioned earlier, these are just teasers relative to the depth of focus found in a graduate-level course on the topic. Oh – and the reason I can speak about this is that this was me many years ago, and as I mentioned, I eventually got lucky. But until I got lucky and was able to find a program that was a good fit and allowed me to stay in my profession – I did what most of these semi-professional evaluators do. I attended workshops and conferences, read books and journal articles, and posed my questions on EVALTALK. And honestly, it wasn't enough. Yet, with the exception of a few opportunities, there really is not much out there for the advancement of people falling into this category. Some are early enough in their careers that they can make the move to a direct residence program. In my case, the residence program accommodated me. But there need to be more opportunities like mine – otherwise, we are leaving the semi-professional evaluators to their own devices with little support.

Do you have ideas on how to build evaluation capacity and knowledge? Please share!


As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, February 23, 2011

Language and Evaluation

A great many people have spent a great deal of time thinking about what differentiates evaluation from research. I won't press too far into this, other than to share that if you Google the phrase "difference between evaluation and research" as of the posting of this Blog, you would get over 5000 hits. Now, I'm sure there is much repetition in the form of quoting others and the like, but still – 5000 pages on which that statement occurs. Well, I'm going to talk about one aspect that affects my life almost on a daily basis – issues of language.

The current state of the art of evaluation suggests that a good evaluator is one who engages his or her stakeholders. What does that really boil down to? You have to talk to the stakeholders. Now, stakeholder is actually a very large word – and no, I don't mean the number of letters or the fact that it might be a higher-order vocabulary word for the SAT, ACT, or GRE. Rather, the concept of a stakeholder can spread across many different groups of individuals depending upon what sort of approach and philosophy you have about programming and evaluation. I'm not going to go into the various possible combinations, but suffice it to say that you can be dealing with people who fund the program, implement the program, participate in the program, are not participating in the program, are in the community where the program is implemented, are not in the community where the program is implemented, and on and on and on. The combinations aren't so much important to this Blog post as are the stakeholders' backgrounds, understanding, and vocabulary.

A few years ago, I had a discussion amongst individuals who all work for the same organization. These individuals all held the same position within the organization. This organization used, and currently still uses, the word – Objective. When asked what the word meant, the broadest definition could be – what the program is trying to achieve. However, that is where things broke down. For some, Objective meant a Programmatic Outcome. For others, Objective equated to a Programmatic Output. Still for others, an Objective was an Organizational Outcome. And for yet another group, it was a change in Organizational Infrastructure. All were focused on "Measurable Objectives", but no one really agreed on what an Objective was. After a year's worth of discussion and negotiation, we came to the agreement that an Objective would be a Programmatic Outcome or Organizational Outcome. At least we got it to an "Outcome".

When was the last time you had to have a discussion about the language amongst researchers? Ok, those of you who do language research, put your hands down! You get my point, I hope…

But the point was driven home to me again today. In a meeting about an evaluation project we are designing, attended by the program folks and three other evaluators, I uttered another word that I thought we all understood – "Comparison Group". I was shocked to discover that their impression of what the term meant diverged from my own and that of the other evaluators. When they heard "Comparison Group", they translated it to "Control Group". They had a decent definition of a Control Group, and we all know that engaging a Control Group for a study can require significantly more resources than engaging a Comparison Group, especially when the Comparison Group is not individually matched.

[Pausing for a moment here, because my own language may differ from your own… Control groups are usually associated with randomized controlled trials (RCTs), and the costs of engaging in an RCT in community-based programming and evaluation are very high. Control groups are a subset of comparison groups, which are simply groups with whom you compare the outcomes of the group that experienced your program.]

The meeting around this study was rapidly devolving and the design was in jeopardy until I figured out that this was a language issue and not a design problem. The team had agreed to the design. They were under the impression that I was forcing a more rigorous study that would be costly across several domains. I was under the impression that they were stepping back away from the design and wanting something significantly less rigorous. Conflict was brewing. Fortunately, the issue of language was identified before things spun out of control.

I've presented the idea before and I'll present it again. We need better-informed consumers of evaluation. Too often, I find myself and other evaluators changing language and/or dropping evaluation vocabulary out of discussions to attempt to avoid misunderstandings. I'm starting to wonder whether we are doing our clients and ourselves a disservice by this. In our own desire to make things easier for everyone in the short term, we might be causing issues for the next evaluator. Worse, like the discussion around the term Objective, our looseness of language might cause more confusion. I'm considering a short study for myself – to keep the evaluation language in and attempt to be more precise in my definitions with my clients – to see if I can reduce confusion. Anyone else want to give this a try? I would also like to hear your thoughts on the idea.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, February 9, 2011

The Man in the Middle

I started writing this post about two days ago and discovered rather quickly that I was writing more than should fit in one Blog post – so… Instead, I'm going to subject you to a series of posts discussing how to reconcile the wants/needs of funders around evaluation with the wants/needs of the small- to medium-sized nonprofits with whom I'm familiar. Tossed in will be some reflections on some reading I've been doing for school and, of course, you get my own opinions and thoughts – full force!

To begin, I would suggest you take a look at my Blog post in January (2011 if you are reading this years from now – hello future people!). Go ahead – I’ll still be here when you get back…

So, you read my comments about my frustration with the lack of outcomes information coming to me from organizations soliciting me for donations. Well, those winds of change came quickly, and a partner in funding is looking for outcomes, as is my own Board. My CEO, who gave me the name "The Evaluation Evangelist", has pointed out to me a few times that "a prophet is rarely heard in his own land", and my previous warnings about nonprofits and foundations needing to attend to outcomes (versus outputs and other processes) went unheeded. And as with all crises, I think we are at the beginning of a change.

Before I go further, I should tell you that while I believe we should always consider the outcomes of programs, projects, advocacy, and whatnot – there is a time and place for evaluating said outcomes. This is tied to the questions the stakeholders have for the evaluation and what is possible to measure, given the theory of change of the program. Today, the "top" funding stakeholders are asking for outcomes and, unfortunately, because of their attention, nonprofits are going to need to react. Why do I say "unfortunately"? Because the interest in programmatic outcomes didn't originate in the nonprofits delivering the programs.

Granted, I have access to a small number of nonprofits, but in their study of nonprofits, Reed and Morariu (www.innonet.org) found that more than 75% of nonprofits spent less than 5% of their budget evaluating their programs – 1 in 8 spent no money at all. Additionally, funders were characterized as the "highest priority audience for evaluation" and, surprise – outcomes and impact evaluation were rated as the highest priority. So, my experience with nonprofits, while small, does seem to echo the broader population of nonprofits.

So, if things are as they always have been – otherwise we wouldn't have the results of the Innovation Network's State of Evaluation 2010 – why would I be concerned? Sure, I have been an advocate for evaluation use, and just because I've been advocating for it (bigger names than mine have for a lot longer), that shouldn't effect change. In fact, one could argue that I should be pleased – interest in evaluation is increasing in the funding community. Except, there is little education for the funding community around evaluation. There is little use of evaluation by the funding community. And the expectations coming out of the funding community are the equivalent of taking an older car that has never gone faster than 20 miles per hour and slamming on the accelerator to go 80 miles per hour (for those of us who use metric, you can substitute KPH and still see the analogy). Nonprofits that had at best conducted some pre-test/post-test analyses of knowledge change in participants in their programs (more likely did a satisfaction survey and counted participants) are now being required to engage in significantly more sophisticated evaluations (ranging from interrupted time series designs to randomized controlled trials). The level of knowledge required to conduct these types of studies with the implied rigor associated with them (I say implied if only because I can find a comparison group for anything – it just might not be appropriate) simply does not reside in most nonprofits. They haven't been trained and they certainly don't have the experience.

The funding community's response is to offer, and in some cases require, an external contractor to support the evaluation. This could lead me to talk about the difficulties in finding qualified evaluators, but we won't talk about that in this post. It is an issue. However, what occurs with the involvement of an external evaluator? They do the work to support the funder's objectives, and after the funding for the project ends – they tend to leave too. There is also an issue around funding the evaluation at the level of rigor required – that too will come in another post. But the message I want to leave you with here is that the engagement of an external evaluator does little to increase the buy-in, much less the capacity, for the organization to engage in internal evaluation. The "firewall" preventing bias of an internal evaluator (e.g., organizational pressure to make the organization look good), while certainly improving the funder's perception that the evaluation is more rigorous, does little to help the nonprofit other than to aid them in maintaining their current cash flow. [Incidentally, I'll address the internal versus external evaluator conflict in a later post as well. I think this is something we can all benefit from exploring.]

So – what am I advocating for? Let's not take that older car on the highway just yet. Let's listen a bit more closely to evaluation thought leaders like David Fetterman and consider what we can do to improve the capacity of organizations to do their own evaluations. Let's show them how attending to outcomes might help them improve their organization and the services they provide to their participants. Perhaps we should think about evaluation from the standpoint of use and then apply the rigor that is reasonable and possible for the organization. Bringing in an external evaluator who applies techniques and methods beyond the reach of the organization results in something mostly for the funder, not for the nonprofit. At best, the nonprofit learns something about the one program, but beyond that – nothing. To them, it could almost be considered a research study versus an evaluation. Let's partner with the nonprofits and get them up to the speed we want, with careful consideration and deliberation, versus just slamming on the accelerator.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, January 4, 2011

Answers To Nothing

Yes, once again I am going to subject you to a musical reference – this time, it is Midge Ure’s song, ANSWERS TO NOTHING. The lyrics and a YouTube video can be found here - http://www.lyricsmode.com/lyrics/m/midge_ure/answers_to_nothing.html

The underlying story of the song is that of a disillusioned and now cynical individual who has heard attention-grabbing stories from respected individuals claiming to have the answers to his most important questions, but who has found those answers lacking in substance.

As with many of my Blog posts – I find that my daily life "excites" the evaluator in me. For those of us in the United States of America, the last month or two of the year marks a time when we receive many communications in the form of email, phone calls, and postal mail – all soliciting donations to support any number of causes. The reason for the timing and intensity has to do with the tendency for many of us to wait until the end of the year to make donations, inspired by our relative wealth and the reminder that these donations can be tax deductible. In the Evangelist household, we find ourselves making our decisions based upon the causes that most interest us, and as such, many of these solicitations go unheeded.

However… You knew there was going to be a however here, didn't you? What do these solicitations have to do with the topic of this Blog?

Perhaps a few of you have already sifted through your memories of what you received this year and, more importantly, the content thereof – and know where this is going. I'll assume there is someone out there who either doesn't receive many of these or has not paid much attention to them.

The majority of these solicitations talk about what the agency or organization does – in other words, they tell me how they are spending their money. A smaller percentage will tell me a story of one of the people their programming touches. But few, if any, will talk about the impact they are making on that person's life or the outcomes of their programming. They share stories about need. They share stories about what they do. They don't talk about change in lives.

The evaluator in me applauds that they can talk about need – they have clearly done some form of a Needs Assessment. The evaluator in me is even happier when they can clearly describe their program and the number of people touched by the program. However, the funder in me, and to some degree the evaluator, is disappointed that they can't tell me what sort of change they are effecting.

In my daily life reviewing evaluations of programs, I find a continuum of depth. On one end are the evaluations that focus on describing the process of the program (e.g., number of people attending a training session, number of fliers distributed), and on the other end – far, far away – I occasionally see evaluations that include a description of the process of the program but also speak to measures of change in important outcomes and the relative difference found in the same measures for individuals who didn't participate in the program. I honestly get excited when I see an evaluation design that is simply a measure of change for participants of a program (without a comparison group). It is rare enough compared to the description-only evaluations that I often see. As a funder, these are nice, but they really don't satisfy – they are often Answers To Nothing, as they don't answer the question funders most often ask.
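The gap between those two ends of the continuum is easy to see in miniature. Here is a small sketch (the scores below are invented purely for illustration) of why the comparison group matters: a participant-only design stops at the within-group change, while adding a comparison group lets you subtract out the change that likely would have happened anyway – a simple difference-in-differences estimate.

```python
# Hypothetical pre-test/post-test scores; all numbers are made up for illustration.
participants_pre = [52, 48, 55, 60, 50]
participants_post = [70, 66, 72, 78, 68]
comparison_pre = [51, 49, 54, 58, 53]
comparison_post = [58, 55, 60, 63, 59]

def mean_change(pre, post):
    """Average within-group change from pre-test to post-test."""
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

participant_change = mean_change(participants_pre, participants_post)
comparison_change = mean_change(comparison_pre, comparison_post)

# A participant-only design can report this much...
print(f"Participant change: {participant_change:.1f}")   # 17.8

# ...but only the comparison group separates program effect from background change.
print(f"Comparison change:  {comparison_change:.1f}")    # 6.0
print(f"Estimated effect:   {participant_change - comparison_change:.1f}")  # 11.8
```

With these invented numbers, participants improved by 17.8 points, but the comparison group improved by 6.0 without the program, so the change plausibly attributable to the program is 11.8 – a far more satisfying Answer than either the headcount or the raw change alone.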

Peter York of the TCC Group had an interesting take on the interests of individuals engaging in charitable giving. He posits that the mindset for donations has been on buying units of activity, not on impact. The impact has always been assumed. As a result, the solicitations often contain information about how low the soliciting organization's overhead is and focus on what you are buying. Up until recent years, with increased scrutiny being directed towards larger funders and an interest by the public in seeing results, government and funding organizations were also interested in what they bought. However, like the disillusioned young man of the song, they are becoming less interested in the story and more interested in getting answers – answers tied to measurable change (outcomes and impact).

ANSWERS TO NOTHING was released back in 1988, and the lyrics were clearly not targeted at nonprofit leadership's ears. But as a funder interested in outcomes and impact, and as an evaluator interested in helping organizations improve their programs as well as garner support, I leave you with the refrain from the song:

Oh, oh, oh, lied for the last time

Oh, oh, oh, died for the last time

Oh, oh, oh, cried for the last time, this time

Oh, oh, oh, believed for the last time

Oh, oh, oh, deceived for the last time

Oh, oh, oh, believed for the last time, this time

As a funder, I've grown cynical – while I might not go so far as to believe the solicitations contain deception, and while my heart cries at the needs – I no longer want to know just what you are doing; I want to know what change you make, and I'm not alone. Other individual and larger funders share my position. As evaluators, we need to work with organizations not only to improve their programs, but to help them tell their story so that their Answers are Meaningful.

As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist