Friday, October 23, 2009

Evaluation, Standards of the Profession

Well, it certainly has been some time since I posted here. As I mentioned earlier, my goal is to post twice a month. Clearly, I’m not meeting my milestones. After a review of my process, I have modified my plan and expect you will see more posts in the future, meeting that goal.

Today’s topic focuses on the professional side of evaluation. Until now, I’ve been presenting the rationale for evaluation and its importance in organizational and program development and maintenance. In the past few months, I ran into an issue that has made me think more about the profession of evaluation and its standards. Funders and nonprofits are often faced with the decision of hiring outside their organization for evaluation support. How does one pick an evaluator, and how does one know whether an evaluator is good?

These are great questions, and ones I am still struggling with. I would say that the contractors I work with are fantastic. Yet the only commonality they share is that they are university based. Otherwise, they have different backgrounds, different skill sets, and different views on how evaluation should be conducted. Last year, I brought them together to share their work with each other, and it is something that our Foundation will continue to do. The differences among these contractors were rather striking; some are noted below:

· Quantitative versus qualitative methods

· Different focus on aspects of program (e.g. sustainability, quality improvement)

· Different background training and focus (e.g. public health, public policy, education)

However, there was a common factor that all shared. They had training in and followed “research” methodologies tied to their backgrounds. While there are some language differences, individuals trained in public health, psychology, social work, and sociology have all taken basic coursework in social science research methodologies. Because all of these evaluators are university based, they are required to conform to human subjects rules and pass their designs through an Institutional Review Board (IRB). That is a very large commonality, and it constrains the work that they do. Further, it establishes a form of minimum standard for these evaluators.

But evaluators aren’t just based at universities. There are many independent contractors that provide evaluation services. These contractors can come from backgrounds similar to the ones I listed above, but they can also have other backgrounds that vary in education (type and level), philosophy, and technique. Those without social science backgrounds may have learned different standards of “research.” Finally, most of these contractors are not subject to any form of IRB. As a result, there is the possibility of greater variation. The purpose of these thoughts is not to speak to the idea of variation, for I believe that variation can be both good and bad, depending on the situation, the needs of the stakeholders, etc. Rather, I want to look at this issue from the standpoint of minimum standards.

So, to identify a minimum standard, we need to agree on what evaluation is. Again, we can argue this, as different cultures have different views on it. Instead, let us assume that you and I have our own definitions, with the common idea that at the end of the day we will have the information we want to have. I would argue that the first standard of evaluation is really driven by the needs and wants of the primary and associated stakeholders. In my framework, that means the development of a theory-based logic model of some type that links what the program or project is doing with the outputs and outcomes we are trying to affect, which will in turn inform my team as to what they might want to know. Additionally, there are other strategic needs that can inform the evaluation design and the minimum standard for review (e.g. organizational strategic focus, environmental assessment needs).

Once this first standard of informational need is identified, we have the minimum standard of what we want to know. The next step is to identify how and what will be done, or some sort of methodological standard. This is where things get a bit complicated, but they can be teased out and cleaned up.

To begin, there is the basic tenet of human subjects protection that borrows a bit from the Hippocratic oath – “do no harm.” If some harm must come to the participants, then the benefits of what is learned must outweigh the cost, and reasonable efforts must be taken to ensure that the damage is addressed. Incidentally, I would propose that the organizations engaged should also be viewed in this manner. The evaluation should not damage the organization, and reasonable efforts should be taken to ensure that any damage is addressed. Unfortunately, I have had an experience in which tenets of this rule were not applied at the organizational level (specifically, informed consent to the evaluation) and some damage was done and, worse, ignored. So, the second standard of professional evaluation should be not to harm the individuals, programs, or organizations engaged in the process.

I should clarify that the manner in which individuals and organizations go about applying their evaluation-derived information can and should be covered under this as well. It is the evaluator’s responsibility to ensure that the organization receiving the information is given every opportunity to interpret it correctly. Beyond ensuring that the information is interpreted appropriately, however, I don’t bind the evaluator.

The third standard would be the acceptance and willingness of the evaluator to be bound by the Guiding Principles of the American Evaluation Association - http://www.eval.org/Publications/GuidingPrinciples.asp. In essence, the Guiding Principles cover the first two standards listed above, but I feel they are important enough to call out separately. The Guiding Principles also address, in general, the concepts of systematic inquiry (including educating the stakeholders on methodology and limitations), evaluator competence, integrity, respect for people, and responsibilities for general and public welfare. While membership in the American Evaluation Association does not indicate that the evaluator considers themselves “bound” by these principles, members have been made aware of them in various forms, including the Association’s website and the American Journal of Evaluation.

Earlier this decade, members of the American Evaluation Association discussed developing constraints to better define an evaluator. Ideas floated included an exam and some sort of certification. Even this year, the membership still struggles with the identification and development of a tighter, more distinguishing definition of an evaluator. Again, one can find calls for an Association-related certification, but given the breadth of what defines evaluation, a single test or even a series of them has been rejected by other members. Many universities provide training in evaluation and/or evaluation-linked skills - http://www.eval.org/Training/university_programs.asp - as do other organizations that provide professional training and, in some cases, certification. This patchwork of diplomas, certifications, and training provides something in the area of constraint. One will have a better sense of the skills and training of a graduate of Claremont Graduate University’s or Western Michigan’s programs, but it requires the person hiring said evaluator to be familiar with the programs. That means that I, as Director of Evaluation for my Foundation, must be familiar with these and other programs. Fortunately, I’ve been a member of the Association for several years and have had a goodly amount of contact with faculty and graduates of these programs. I have not had much contact with faculty and graduates of American University or California State University, Los Angeles. I have known people to attend The Evaluators’ Institute - http://www.tei.gwu.edu/ - but am unfamiliar with their work and know little beyond the brochures that lap against my inbox on a yearly basis. So, what is a Director of Evaluation for a foundation to do, or for that matter a Director of a nonprofit, when reviewing a proposal from a potential contractor?

First, know what it is that you want out of an evaluation. Determine what information you want to know about the program(s)/project(s) and document it. It has been my experience that when presented with a vacuum, evaluators will build what they can into the evaluation’s structure. While some serendipitous information of value can be discovered, it is far better to give the contractors a sense of what you and your organization wish to learn. This information should be incorporated into the request for proposals (RFP). Second, the RFP should also include a requirement that the contractor agree to and adhere to the American Evaluation Association’s Guiding Principles. Finally, request to see previous work from the contractor, to get a general sense of their philosophy and style of evaluation.

In reviewing these documents, think about your organization and the stakeholders of the evaluation. Do the stakeholders value one methodology for garnering information over another? Will the evaluation provide you with what you want to know? Really, the question is: are the contractor and their evaluation design a good fit for you? That fit - agreement in philosophy, focus, intent, and concern - is critical. Without it, even the most rigorous evaluation design that develops all sorts of potentially useful information will lie fallow for lack of investment by the stakeholders.

Incidentally, I struggle with selection of contractors for our evaluations, much as others do. I value the diversity that makes up the Association and the profession of evaluation, so I oppose stricter constraints on the use of the title of evaluator. However, the above is the “best methodology” I’ve developed to select contractors.

As always, I'm open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Wednesday, July 29, 2009

Evaluation & Dissemination Can Save a Life

Warning, the following post contains images and discussion tied to the 2004 Indonesian Tsunami.

At 00:58:53 UTC on December 26, 2004, a subduction-zone earthquake of magnitude 9.1 struck off the west coast of Sumatra. The event was recorded on seismographs throughout the world, and the Pacific Tsunami Warning Center issued a bulletin at 01:14 UTC - http://www.prh.noaa.gov/ptwc/messages/pacific/2004/pacific.2004.12.26.011419.txt. It was followed by another bulletin at 02:04 UTC - http://www.prh.noaa.gov/ptwc/messages/pacific/2004/pacific.2004.12.26.020428.txt - upgrading the magnitude estimate of the earthquake from 8.0 to 8.5. Later, the same center issued subsequent bulletins (December 27, 2004) for the Pacific Basin, updating shore areas on sea-level differences of upwards of half a meter from crest to trough.

The Indonesian Tsunami resulted in an estimated 230,000 deaths and is an example of how a form of evaluation, in this case observational assessment, could have saved lives.

In 1949, the Pacific Tsunami Warning Center was established as a reaction to the 1946 Aleutian Island Tsunami that was tied to 165 deaths between Alaska and Hawaii. The system is based initially on seismic data, which was all that was available to the center at the time of the 2004 Tsunami. No similar center existed in the Indian Ocean with the closest center being in Japan. However, the initial magnitude of the quake did indicate to some individuals that a significant event would occur in the Indian Ocean.

At this point the blog will move from fact to a bit of conjecture, as I am unable to find actual evidence that the following is true, but it certainly seems plausible.

Apparently, NOAA (the National Oceanic and Atmospheric Administration), a United States Department of Commerce agency, attempted to contact government officials in countries bordering the Indian Ocean to provide warning. Further, as the tsunami made its way across the ocean, the warning was passed from government to government. Unfortunately, a method to effectively and efficiently disseminate this critical information was not in place, and hundreds of thousands of people were caught unaware, with the first warning being either the serious reduction in sea level due to the preceding trough or the froth of the crest on the horizon.

Be that as it may, there are two major facts of this tragedy:
1) Incomplete evaluation systems were available to assess the issue (NOAA knew there was a possibility for a Tsunami, but had no idea as to intensity or direction).
2) There was no dissemination plan for the emergency.

The United Nations has taken steps to ensure that the "evaluation" of sea level be implemented consistently. There are plans for similar systems to be set up in the Mediterranean Sea, North Eastern Atlantic, and the Caribbean Sea. The expectation is that major communities can be given warnings that will allow for timely evacuations. The issue still is dissemination and it remains to be seen how effective the program will be.

There is an even more striking "measure", not discovered until recently, that jarred my thoughts around evaluation. The Centre for Remote Imaging, Sensing and Processing recently made available images that they unearthed while reviewing satellite imagery from the date of the Tsunami. As can be seen in the linked images (http://www.crisp.nus.edu.sg/tsunami/waves_20041226/waves_zoom_in_b.jpg , http://www.crisp.nus.edu.sg/tsunami/waves_20041226/waves_zoom_in_c.jpg), the waves from the Tsunami were clearly visible - giving a sense of intensity and direction of propagation.

The warning system, which had the trip-wire measure of the seismographs, could have been enhanced with information from the satellite images. But again, the "evaluation" would have been useless, as it would not have reached the individuals who would have most benefited from it. Instead, the connected world watched helplessly as the waves crossed the Indian Ocean and inundated the coastal communities, killing hundreds of thousands of people.

Being an evaluator, I take home two critical thoughts from this:
1) Evaluation of systems is very important; in some cases it doesn't just describe a program - it can in fact save lives.
2) Evaluation is only good if it is placed in the hands of the people who need the information to inform their decision making.

Evaluation can serve as a warning of impending doom for a program.

Granted, the earthquake and subsequent tsunami were a very low-probability risk for the region, even given the fact that the region rests in a very tectonically violent area. The UN has "learned its lesson" and is implementing a system in the region, deploying evaluation after the program of safety for the population in the area failed (reached a very negative outcome). Today, nonprofits as well as for-profit businesses, funders, and the like (really all of us) have received tremors that might indicate a larger earthquake that will crush us further. Even given the fragility of economies, knowing where a program is going and what is going well (and conversely, what is not) is critical. In most cases, the programming isn't going to harm individuals and the loss of it will not have the same widespread impact as a tsunami, but the metaphor remains. A few key sensors connected to the right stakeholders can forestall a major disaster.

As always I'm open to your comments, suggestions, and questions. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, July 2, 2009

Evaluation, "Best Practices", and Fears

Those of you following my blog and Facebook updates/tweets know that I traveled out to Baltimore a few weeks ago to speak about how our Foundation uses the information we collect as part of evaluation to inform our strategic planning. Recently, I have been in the throes of data analysis for an evaluation we are doing of an initiative that funds organizations with whom our Foundation normally does not partner. These organizations are not traditionally associated with health care delivery, but were viewed by our Board as a possible link to populations normally not reached by our more traditional partners. The evaluation and analysis have gone well and I have some interesting things to share with our staff, Board, and the grantees we serve – but the experience has given me food for thought.

First and foremost, there is a common issue of organizations collecting data but not having the resources to aggregate it in a meaningful manner. The solution is pretty simple and inexpensive relative to other interventions: for those organizations without it, buy them a spreadsheet program like Excel and/or some data management software like Access. Of course, the other key part is to teach them how to use the software. It is very frustrating to an evaluator to discover that the data is all there, ready to be used, but cannot be accessed by the people who need to see it, because they are unable to take the next step and put it in some sort of system that renders it available for analysis.
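
To make that aggregation step concrete, here is a minimal sketch in Python (rather than Excel or Access, and with a hypothetical file name and columns of my own invention) of what "rendering the data available for analysis" can amount to: rolling already-collected records up into totals a program manager can actually read.

```python
# A minimal sketch (not the Foundation's actual process) of moving raw,
# already-collected records into a simple aggregate a program manager can use.
# Assumes a hypothetical file "attendance.csv" with columns:
# date, site, participants_served
import csv
from collections import defaultdict

totals_by_site = defaultdict(int)
sessions_by_site = defaultdict(int)

with open("attendance.csv", newline="") as f:
    for row in csv.DictReader(f):
        site = row["site"]
        totals_by_site[site] += int(row["participants_served"])
        sessions_by_site[site] += 1

for site in sorted(totals_by_site):
    total = totals_by_site[site]
    sessions = sessions_by_site[site]
    print(f"{site}: {total} participants over {sessions} sessions "
          f"(avg {total / sessions:.1f} per session)")
```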

However, this is not the issue I’m interested in exploring in this post. Rather, this issue is a symptom of another problem – namely the requirement, by funders, that grantees collect data to report. In many cases, if the grantee were using the data to support their internal operations, they would already have mechanisms to analyze it – even if that is tabulation on a piece of paper. Instead, funders are asking for information that the grantee does not find valuable to their organization, and thus they collect and collect and then have difficulty figuring out what to do next.

Part of the underlying issue has to do with the evaluation community that I belong to. Yep, I’m calling us out on this one. If I had a dollar for every time I have heard an evaluation contractor or organization (funders included) try to “sell” a nonprofit on evaluation with the pitch “the data you collect will help you solicit funding,” I would be a wealthy man. Raise your hand if you have heard or said it. --Incidentally, my hand is raised too. I’m just as guilty.--

We have trained the nonprofit community to believe that evaluation is something that you add on to a program, not something that can be a useful part of a program. But worse, while many funders require an evaluation, they often do not fund it or provide additional capacity building for their grantees. I’m finding that few foundations actually provide workshops, much less more in-depth evaluation training, for their grantees.

But it gets worse – much worse - and from my recent experience, somewhat scary for the future of funding.

There is a growing trend for funders to focus on funding “Best Practices”. At the conference in Baltimore, I often heard that statement tossed around. The staff at my own organization specifically ask for “Best Practice” programming when soliciting proposals for certain initiatives and priority areas. But, here’s the dilemma. If no one is supporting evaluation, how are “Best Practices” identified?

Recently, one of our Board members shared with me a program that is very interesting and judging from the evaluative work they have done so far, very promising. The Board member came to me, looking to see if I knew anyone that funded evaluation. Our Foundation funds programs and the evaluations are in support of those programs. In this case, they were looking to establish the evidence that the program works well to promote it nationally. Beyond a smaller group of funders, you would be hard pressed to find someone who would fund an evaluation on the scale that they are requesting – and that would only be for the programming they are funding. So, the question to me was, “where do we find funding just for evaluation?” And the answer?... I had no clue. Well, I did and I’m forwarding it on to a colleague who will be nameless, as I want to surprise her with it. But, even in this case, her organization funds research, not evaluation, so it probably is a stretch.

The point I’m making is that funders and nonprofits rely upon evaluation, even if they aren’t aware that they are doing so. “Best Practices” evolve from the results of evaluative work. But if nonprofits are only conducting evaluations to appease funding requirements or to solicit more funding, then the quality and focus of the evaluation often move away from program improvement and analysis toward tracking operational goals (basic outputs) and toward outcomes selected either to demonstrate success (as opposed to test for it) or even just determined by a funder who isn’t the expert in the programming compared to those implementing it.

So, back to the grantees and the programs I’ve been evaluating. First of all, given the trend in funding, such innovative funding might be squelched in the future because the programs do not have “Best Practice” status. Their programs aren’t as effective as they could be because they are not fully aware of their own programming. And yes, I must admit, their lack of knowledge of Excel or some other spreadsheet program was a blind spot for me. I’m moving to rectify it for my grantees, but it makes me wonder how many more nonprofits are suffering from the lack of skill and resources to take the hard work they have done in collecting data and make it more useful. How many organizations still tabulate on a piece of paper or, at best, use their spreadsheet program for only that purpose? I’m afraid of the answer, as I am afraid of the focus on “Best Practices”.

I have an answer that I’m implementing within the sphere of influence I have – mostly with my own grantees.

1) Over a year ago, I stopped “selling” evaluation as a way to get more funding and began focusing the grantee’s (and my own Foundation’s) attention on program improvement.

2) I have started workshops for my grantees and other nonprofits in the region to teach evaluation methods.

3) I require that all my grantees receiving funding beyond basic infrastructure conduct an internal evaluation.

4) My Foundation funds these evaluations along with measurement supporting additional questions we have of the initiative group as a whole.

But, my experience has also taught me that I need to provide more basic training and support for the grantees in the form of funding for software and training to use it.

So, what can you do? If you are a funder or work for a funder, you might want to take a look at what your policies are around evaluation and how you use it, much less what you encourage your grantees to do with it. If you work for a nonprofit, I urge you to think about how you use the measures you collect and why you collect them. But, before you run off to do that (because I know this has inspired you to push away from your computer and get busy), I would greatly like to hear your thoughts and reflections.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist

Tuesday, June 2, 2009

Impact and Responsibility

In my last post, I wrote about how I think that not considering the impact of a program is irresponsible. Yesterday, I was struck by a fantastic example where consideration of impact is critical. I haven't posted much about my family (other than a few tweets that you can see on the side of the blog page), but I have a father who is significantly older than me. He lives in an assisted living community, which always strikes me as a college dorm with walkers and rascals. In any case, yesterday, in the early afternoon, he was taking some sun in the atrium (he refers to it as his vitamin D treatment). He got up to go back inside, and that's where the story gets fuzzy.

At some point, he was found in the atrium, on the ground. A large, bloody bump resided on his skull (bump seems to trivialize the size of the hematoma). He was disoriented and unable to get up off the ground. Given the nausea and short-term memory loss as well, the staff contacted me and emergency services to get him to the emergency room.

This is the background story for my discussion of outcomes...

Over the next 9 hours that my father and I hung out together at the hospital, the thought of outcomes barely entered our minds. I should clarify: before I saw my father, outcomes were very strong in my mind, one in particular. After catching up with him in the emergency room and spending an hour or so with him as various scans were done of his body and blood was drawn, the "program" of my father's assessment and care was definitely being assessed through the eyes of process versus outcome. The fear was gone and now we both were focused, like most people spending time in an emergency department, on the process. How long is this taking? Another x-ray? More blood? Can't they find the vein? I could go on. Fortunately, the hospital staff did keep the outcome of the program in mind and steadily worked towards it.

Incidentally, the program got extended beyond 9 hours in an emergency department to an overnight stay in the hospital - for observation. In this case, the tests done by the hospital staff indicated no impact from the impact with the concrete ("road rash" at the point of impact indicated that he did hit the concrete with his head), but being responsible evaluators, they chose, in conversation with his primary care physician, to keep my dad overnight for observation - to see if anything developed from the accident. Impact.

From an evaluator's perspective, I greatly appreciated the program that my dad experienced. From a concerned son's perspective, I did as well. Now, the hospital could have said that they performed their tests and found nothing wrong with my father and discharged him from the emergency department last night - much like some evaluations that place the sole post-intervention measurement immediately after the intervention or program is concluded. Further, the folks at the residence could have halted the process of assessment when he regained his faculties at the nurse's station. After all, other than the large bump on his head and a few abrasions, he really was "fine" - back to his "normal" self. Or, the nurses at his residence could have simply assumed that he would be ok if he just went back to his room, which is where he was heading in the first place. No one had to do an assessment of whether he was ok (disoriented or not). His original plan was just to go back to his room - if no assessment had been done of the impact of his fall, we would have been none the wiser (other than the nasty bloody thing on his head and some additional stiffness in his movements). Instead, the staff of the residence and the hospital did the responsible thing and continued to assess impact.

Now, I recognize that some programs are so new as to not yet have something to measure outcome-wise. I also recognize that there is only so far that you can follow an impact before you either lose contact with your participants or the cost of the evaluation far exceeds the benefits of follow-up. That being said, more than lip service must be paid to outcomes. At minimum they need to be considered, even in the early stages of program design and implementation (I would argue that is probably the most critical point of consideration), and reasonable measures should be taken to make sure that the outcomes are represented in the evaluation.

Now, I would not be surprised if you have read the above and said something to the effect of "great, so you want me to consider outcomes in my program and evaluation designs. You want me to measure them, if possible, too. So, how much is good enough?"

The answer to that question is to be responsible. Recognize what you need to know to determine whether the program is doing what you want it to do. In the case of my father's head injury - the head containing one of the more critical organs for his survival and quality of life - the staff of the residence, and later the hospital, thought he required more outcomes assessment. However, along the way, there was also evidence of relationships between each activity of the programs administered by the residence and hospital staff and the outcome of my dad surviving and feeling better. They got him off the ground and into a cool setting (it was 90 degrees Fahrenheit and probably much hotter in the atrium). They helped him with his nausea (mostly just providing him a bucket). And they reassured him. Along the way, they conducted their process and outcomes evaluations. But each activity had a theoretical link to my dad's health and well-being. Each assessment grew out of the previous one. Looking at the chain of interventions and assessments conducted by the staff of both places, you can see how each one had a decision point as to whether to add the next intervention or not, whether to add the next assessment or not. These linkages are part of their "theory of care," with each linked to that endpoint outcome. And based upon how my dad was doing, they chose to continue down the theory of care with associated evaluations and assessments. In the end, they took the assessments to where we are today, with him in the hospital under observation. If he had been younger, with fewer comorbid conditions, he probably would have been discharged last night from the emergency room. If the bump hadn't formed or if he had not been disoriented, he probably would have hung out in the nursing station for quite a while yesterday and not gone to the emergency room. The point I'm making here is that the evaluation reflected the need of the staff to garner more information to be responsible to the overall outcome of the interventions - my dad surviving with reduced lasting effects from the fall.

So, I would argue that in this case the evaluation has been done appropriately with correct attention to outcomes. Now, when the hospital discharges my dad (hopefully today), I expect that their evaluation of him will also end. There will be the expected handoff to his primary care physician (more than what some folks will get who are uninsured by the way), but the hospital staff will have satisfied themselves that my dad is fine when they discharge him. They will have counted their interventions a success. Now, in this case, my father doesn't have a chronic disease tied to the program, so no longer term tracking is necessary - but if he did, I would expect that his primary care physician would follow up on that and track the issues going forward.

So, the statement I'm making is that the evaluation of outcomes must be rigorous enough that the program staff and participants can be truly satisfied that the program does what it purports to do. I often see evaluations of educational programming with a pre-test/post-test methodology. On paper, this sounds pretty good. The evaluation often focuses on change in knowledge and sometimes includes change in attitude. However, often the post-test is conducted immediately following the training. I would find this acceptable if the evaluator or program staff could give me a theory of retention or the like that would give us a sense of what this really indicates for longer term knowledge, behavioral, and attitudinal change. Instead, the post-test is often presented as if it is the ultimate outcome evaluation. To me, this is irresponsible. It would be the same as if the residence staff had taken a look at my dad after he sat in the nurses' station for a while (both to cool off and get his bearings) and, seeing that he was fully oriented again, just let him go back to his room, with little concern that being able to tell them who he was and where he was is a poor indicator of any intracranial bleeding that could result in other issues later.
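
To illustrate why an immediate post-test alone worries me, here is a small sketch with entirely invented scores (not data from any program I evaluate); it simply contrasts the gain measured right after a training with the gain at a hypothetical delayed follow-up.

```python
# Illustrative sketch with invented scores (not real program data): the same
# training can look very different depending on when the post-test is given.
from statistics import mean

pre_test       = [40, 55, 50, 45, 60, 52]   # knowledge scores before training
post_immediate = [85, 88, 90, 80, 92, 86]   # immediately after the training
post_delayed   = [55, 62, 58, 50, 70, 60]   # hypothetical 3-month follow-up

def avg_gain(post, pre):
    """Average change from pre-test to a given post-test."""
    return mean(p - q for p, q in zip(post, pre))

print(f"Immediate gain:   {avg_gain(post_immediate, pre_test):.1f} points")
print(f"Gain at 3 months: {avg_gain(post_delayed, pre_test):.1f} points")
# Without a theory of retention (or a delayed measure like the last one),
# the immediate gain says little about lasting knowledge or behavior change.
```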

Speaking of responsibility - I alluded to the process of my dad's care in the emergency department earlier in this post. We both were very much focused on the process and the discomforts associated with being in an emergency room for 9 hours with the associated tests. One of the common process evaluation measures - and, for that matter, one sometimes pushed at me as an outcome measure (shame on you) - is participant satisfaction. I can tell you that my dad was not satisfied with his care at the moment, and I was pretty put out that we had to sit and wait and wait and wait in uncomfortable furniture. That would have been a poor measure of the program. I'm not even sure how they could have made the experience more pleasurable (other than more comfortable surroundings and perhaps a clown that did balloon animals), but to be honest, that isn't their core business. Their core business (the program) is to get people triaged, get them the treatment to stabilize them, and get them the care they need to move them either into the hospital for more care or back out the door. For that matter, let's say that we do receive an assessment of satisfaction when my dad is discharged. Does the fact that my dad's room is rather nice looking, has a couch that I could sleep on, and has more cable channels than either of us has at home really assess the program? Certainly it might be a predictor of whether we return to that hospital for care (and as such is important to the hospital to collect), but it really doesn't determine whether the interventions my dad received at the hospital had an impact on his health - the core of the program.

So, now that I have gone on a very long example to emphasize a point, I'm going to flog that dead horse just a bit more...

All I ask is that program designers, implementers, and evaluators really think about the outcomes of a program and the core of its intent. I ask that they be honest with themselves as to what is important and really measurable. I ask them to draw those direct connections between their resources (inputs), activities, associated outputs, and short, medium, and long term outcomes. At minimum, consider these and then figure out what can really be measured and where you have to, draw upon theory to make the connections to the unmeasurable.

As always, comments are encouraged.

Best regards,

Charles Gasper

Thursday, May 28, 2009

Impact and Evaluation

Perhaps I'm feeling saucy today, but there were two different concepts that I wanted to address and I didn't think that both fit in one blog post. So, dear reader, you are subjected to a Two for Thursday...

So, in the previous post, I mentioned that there is a lot more to be learned from evaluation than just the value of a program. I'm going to change gears a bit here and remind you that I also believe that assessing the impact of a program is very important. It strikes me as a bit reckless to deploy/fund a program/intervention and not observe whether the "thing" done had a positive impact. Yes, I have heard the complaint that it takes quite some time to get to the impact and some change is so incremental as to be difficult to measure, but I find those arguments to be spurious. Yes, some impacts do take time and some interventions are a "drop in the bucket", but there are always shorter term changes that theoretically (yes, back to program theory) are linked to change. Think of them as the stepping stones that lead to the great change. By choosing not to look at some sort of outcome, the evaluator and more importantly, the individuals funding and deploying the program or intervention are being irresponsible.

I'll be writing more on this topic in the next few days and weeks as I struggle with the various arguments that have been made against evaluation of impacts. As always, I welcome comments and questions - especially if you are of the ilk that finds evaluation of impact to be impractical, impossible, and whatever other "im" you can come up with.

Best regards,

Charles Gasper

Evaluation & Strategic Learning

In a couple of weeks, I will be presenting to a group of health-focused funders on the benefits of evaluation and research. Specifically, I'm talking about how our Foundation takes the information we gather from evaluation and uses it to inform our future grantmaking. If you are interested, here is the website for the event - http://www.gih.org/calendar_url2665/calendar_url_show.htm?doc_id=706939. The title of my presentation is Strategic Learning: Integrating Information and Assessing Your Foundation's Impact, and while the presentation will focus on the various avenues we are exploring for impact assessment, there are other important bits of information we have gleaned that have dramatically impacted our grantmaking.

Most people think of evaluation in the framework of value. I mentioned that in an earlier post - if you haven't read it, I find it riveting. [What? I can't be sarcastic in my posts?] But the statement in this case really does hold. Travel with an evaluator some time on a site visit. A significant portion of the first meeting with a team you are "evaluating" is spent explaining that "this isn't an audit" and that "we aren't here to play 'gotcha'". Like it or not, the term evaluation has become synonymous with the favorite activities of the IRS. [Note to anyone affiliated with the IRS: I really like you guys, you do good work, I'm just talking about your reputation, not my own personal feelings. No need to audit me.] Members of the evaluation community have actually discussed using a different word when bringing up the topic of evaluation, and there are a few folks who do. Because of this, the general public doesn't quite understand that there are many more benefits to evaluation beyond discovering whether a program or process actually worked. The process of evaluation allows those interested to learn a whole lot more about what is being evaluated than just that. For example...

There is some literature out in the world suggesting that having a network of resources, and using it, is a strong indicator of the probability of an organization and its programs being sustained for a longer period of time. Curiously, it appears that the network is actually more important than funding. My sense is that the two are somewhat interdependent - having a larger network probably is predictive of more opportunities to find funding. But I didn't start writing this example just to explore that argument further with you - we are actually doing some research with our past grantees to see what bubbles to the top for prediction of sustainability. For now, it seems that networks are very important. The reason I can say that is that some of our evaluation work has looked at the sustainability of programs (we are a "seed" funder, funding new programs or expansions of current programs, not a sustaining funder). We want the programs we fund to be sustained past our funding cycle and have oriented some of our evaluation work to look at how this works. Well, one of the interesting findings was that networks are important (did you notice that I keep driving this point home), but depending on whether you are a rural or urban nonprofit, the relative importance changes - other things bubble up to the top. It is this discovery that led us to the research on prediction of sustainability. The eventual idea is that our program staff will have a checklist to support their review of grant applications, allowing them to assess the probability of the organization and its programs being sustained after funding. How we use the tool may vary; however, the initial intent is to identify issues with a possible grantee from the very beginning and to start providing them with support to help them grow in those areas of weakness.
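
For what it's worth, here is a purely hypothetical sketch of the kind of checklist tool I have in mind; the factors and weights are invented for illustration and are not findings from our research.

```python
# Purely hypothetical sketch of a sustainability checklist; the factors and
# weights are invented for illustration, not results of our research.
SUSTAINABILITY_FACTORS = {
    "active_resource_network": 3,  # e.g., documented partners the grantee uses
    "diversified_funding":     2,
    "board_engagement":        1,
    "data_used_internally":    1,
}

def sustainability_score(application: dict) -> int:
    """Sum the weights of the factors an application demonstrates."""
    return sum(weight
               for factor, weight in SUSTAINABILITY_FACTORS.items()
               if application.get(factor, False))

example = {"active_resource_network": True, "data_used_internally": True}
print(sustainability_score(example), "out of", sum(SUSTAINABILITY_FACTORS.values()))
```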

The concept of networks also shows up in some other evaluation work we are conducting. One of our initiatives focuses on funding "non-traditional" programs - basically funding organizations that normally would not be working with us. A large percentage of these organizations are faith based and/or are small community organizations. Given that we recognize that resource networks are important, we wanted to know how these organizations work. Well, what we have discovered is that these small community organizations excel at developing resource networks. What seems to be a difficult concept for our more traditional grantees to work with is something these small community organizations live and breathe: their resource network. Teasing out how their networks develop, and why, is a huge "learning" that we can share with our traditionally funded grantees. The information we are gathering has nothing to do with whether their program is "successful", but rather gives us information as to how they do their business and are successful in areas that seem to be a struggle for others.

The point I'm making here is that we have learned some impact-unrelated information that is changing, and will continue to change, how we conduct grantmaking. This and other nuggets of information have changed what support we provide and how we provide it. Mind you, we are interested in impact and I'll address my thoughts on that in another post, but I thought it a good time to remind all of us that there is more to evaluation and strategic learning than whether our funding "did good work".

As always, I'm open to your comments.

Best regards,

Charles Gasper

Friday, May 8, 2009

Things!

What's a thing? Merriam-Webster's definition can be found here - http://www.merriam-webster.com/dictionary/thing. But to someone who has learned to write clearly, to use the word "thing" is to cheat - to say something when you can't really say anything. In high school and college, professors drove into my brain that the use of the word is unacceptable. To this day, I find myself still using the word when I can't think of another word that should be used. As an example, I was recently asked, "what is an activity and output?" "Well," I answered, "they are things that..." Now, I continued with my explanation to further state what activities and outputs are, but I used the dreaded word.

Now, why am I talking about the word thing? To most people, it is merely a lazy use of English, but to an evaluator, it is much more insidious. In the past few weeks, I've had the opportunity to review applications for funding, have had discussions with applicants, and have met with a few of our grantees. In many cases, when we discussed their program, there was a sort of placeholder - a thing that was not fleshed out in the development stage and now was sticking out like a sore thumb. In some cases, it was an assumption that something would happen. In other cases, it was just something that wasn't considered. In every case, it was a barrier to the success or effectiveness of the program.

As you know, I'm a bit of a fan of program theory focused evaluation and the construction of theory based logic models of programs or projects. The process forces the designer to think about the things in their models and at minimum, identify them. A well used model will provide the designer with an opportunity to think more about the thing and objectify it (make it real versus an abstract idea). Finally, once in place, it helps the designer with the next critical question - how do I measure my program?

This is a very common question that I hear from the nonprofits I work with. How do they measure their processes, their outputs, their outcomes? The first question I always ask is, "what are they?" The answer I often get is that they had this idea with a bunch of things. And so, I often ask them to draft out their model. Once the model is started and they begin defining the things and what they are connected to, the issue often resolves itself, and it becomes a question not of "how do I measure my program?" but rather of what techniques and tools can be used to measure this specific activity, output, or outcome. The underlying question changes from "what really is my program?" to "what are the most efficient and effective ways to measure the parts of my program?" It is at that point that evaluation starts to make sense to the designer and the information needed to support the program is clarified.
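
As a toy illustration of what "drafting out the model" can look like once the things are named, here is a hypothetical sketch for an imaginary program; the activities, outputs, outcomes, and measures are all invented, but the structure shows how the measurement question becomes specific.

```python
# Hypothetical toy logic model for an imaginary nutrition-education program;
# the point is simply that every "thing" gets a name and an explicit link.
logic_model = {
    "activities": {
        "cooking_classes": {
            "outputs":  ["classes_held", "participants_attending"],
            "outcomes": ["knowledge_gain", "dietary_change"],
        },
    },
    "outcomes": {
        "knowledge_gain": {"measure": "pre/post knowledge survey"},
        "dietary_change": {"measure": "3-month follow-up food diary"},
    },
}

# Once the links exist, "how do I measure my program?" turns into a list of
# specific measurement questions, one per named element:
for activity, detail in logic_model["activities"].items():
    for outcome in detail["outcomes"]:
        measure = logic_model["outcomes"][outcome]["measure"]
        print(f"{activity} -> {outcome}: measured by {measure}")
```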

So - my advice to nonprofits who are designing and/or are running programs and now have a need for evaluation is to engage in a form of program theory modeling as you develop the program to ensure that you really understand what it is you are trying to accomplish. For evaluators, the model will prove invaluable in shaping the evaluation work in which you engage.

As always, I'm open to comments - please feel free to share your own thoughts on this.

Best regards,

Charles Gasper

Wednesday, April 29, 2009

An Argument for GIS and Evaluation

Before you comment about the fact that I haven't written for a while, please understand that I'm doing this as a side gig to my day job and to being a husband and father. That being said, I had hoped to write on a weekly basis. It looks like the pace will be more along the lines of every two weeks.

Now that we've got that covered, we can move to the topic of today's blog. While I had originally planned to write my thoughts on evaluation of really complex systems (social determinants of health), a little thing came up - you may have heard of it - Swine Flu.

Now some people might be scratching their heads at this point wondering, what does Swine Flu have to do with evaluation? Well, our colleagues in the world of epidemiology are engaging in a very large evaluation of the origins and spread of the virus. It is interesting from a methodological standpoint in that they are doing both a retrospective and a formative evaluation of the disease and interventions. Many tools are brought to bear - one of which I find very useful: GIS mapping. Granted, most of the public websites are pretty rudimentary, but in aggregate they contain useful information that can be further "mashed up" to tell an interesting story of trends. In some cases, they report the geocoded location of a possible or confirmed case and the status of the case (e.g. patient mortality). On other maps, counts by region are all that is offered. What all these maps provide, though, is a picture of the spread of the illness and possible indicators of the virulence and the impact of interventions (the maps are reporting the outcomes of the intersection of both). Those applying interventions, who have additional data not reported in the metadata attached to each point, have a better idea as to whether the interventions are working or not. This informs them as to what options are available for the future.
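
As a bare-bones illustration of the mapping idea (not drawn from any of the actual sites linked below), here is a sketch with invented coordinates and counts; real maps add basemaps, dates, and case status, but the principle of plotting geocoded reports so spread and clustering become visible is the same.

```python
# Bare-bones sketch of the "mash-up" idea with invented coordinates: plot
# geocoded case reports so spread and clustering become visible.
import matplotlib.pyplot as plt

# (longitude, latitude, reported case count) - hypothetical points
cases = [(-99.1, 19.4, 26), (-100.3, 25.7, 8), (-118.2, 34.1, 5),
         (-97.7, 30.3, 3), (-90.2, 38.6, 1)]

lons, lats, counts = zip(*cases)
plt.scatter(lons, lats, s=[c * 20 for c in counts], alpha=0.5)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Reported cases (marker size proportional to count)")
plt.show()
```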

From a public standpoint, I had a conversation with someone here in St. Louis yesterday. The question on both of our lips was what we should be doing at this point. Should we start reducing the amount of time we spend in public? Should we send our children to childcare? What sorts of precautions should we take, and to what extreme? The CDC provides general guidelines for reducing the risk of transmission of the flu - http://www.cdc.gov/swineflu/ - along with cautions about areas with high concentrations of virulent cases (identified by higher mortality rates). Frankly, I also took a look at the GIS maps and realized that while we are at some level of risk due to air traffic and the timing of spring break for the high schools and colleges in the area, we have no confirmed cases as yet and the level of spread would indicate that things are pretty quiet now. However, I am watching the trends to see if the infection rates change. I also realize that this is just the start and that there will be a dormant summer before this really hits hard in the fall. The case for prevention and treatment (and subsequent evaluation) will be much more complex (and as such, very interesting).

Here are links to the maps I've been following:
http://maps.google.com/maps/ms?ie=UTF8&hl=en&t=p&msa=0&msid=106484775090296685271.0004681a37b713f6b5950&ll=32.650649,-116.139221&spn=2.062781,3.99353&z=8
Prettier version: http://maps.google.com/maps/ms?hl=en&ie=UTF8&msa=0&msid=106484775090296685271.0004681a37b713f6b5950&t=h&ll=35.46067,-94.746094&spn=66.482866,112.5&z=3&source=embed
More "qualitative information": http://healthmap.org/swineflu
Timeline example - not accurate, as datestamps for some of the data are incorrect, but it gives you a sense of what can be done to view spread over time: http://homepage.ntlworld.com/keir.clarke/flu.htm
My favorite static version: http://maps.google.co.uk/maps/ms?hl=en&ie=UTF8&msa=0&msid=109496610648025582911.0004686892fbefe515012&ll=26.115986,-9.140625&spn=144.408712,283.007813&t=p&z=2
An interesting map regarding travel to Mexico: http://maker.geocommons.com/maps/4884

The last map is interesting if only because it seems to reflect the case distribution of Swine Flu in the US. In other words, there might be a correlation between the level of travel to Mexico and the incidence rates of Swine Flu. I say might be, as I haven't crunched the numbers, nor do I know of a study as yet. If someone has, it would be great to see. However, it appears that what we see right now is the "first wave" of cases that were contracted in Mexico. Their impact on the regions where they were identified (in Missouri's case, right now 12 unconfirmed cases) will show up in a few days if the virus is maintaining its level of virulence and the preventative measures either have not been implemented or are ineffective. In any case, it will be the evaluation process and the GIS maps that best inform public health officials as to what is happening and allow them to better assess what is working or not.
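
If someone did want to crunch the numbers, a first pass might look like the sketch below; every figure in it is an invented placeholder, not actual travel or case data.

```python
# If someone did crunch the numbers, a first pass might look like this.
# All figures below are invented placeholders, not actual travel or case data.
from statistics import correlation  # available in Python 3.10+

travel_to_mexico = [120, 80, 300, 45, 60]   # hypothetical trips (thousands), by state
reported_cases   = [14, 6, 40, 2, 5]        # hypothetical case counts, same states

r = correlation(travel_to_mexico, reported_cases)
print(f"Pearson r = {r:.2f}")
# A strong positive r would be consistent with (not proof of) early cases
# tracking travel patterns rather than local transmission.
```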

Best regards,

Charles Gasper

Tuesday, March 24, 2009

So You Thought You Made an Impact

A great deal of conflict surrounds what constitutes an evaluation. As posted above, there are some folks in the evaluation world who do not believe an evaluation has been conducted unless there has been some assessment of value. Value is often translated by stakeholders to mean "impact". So, what is this "impact" and how does one assess it? To begin, I think we need to look at the reasons for the program's creation - in other words, why did the program get started in the first place? Most programs, much like small businesses, are created based on the idea of someone who thinks there is a "market" for their product. Depending upon how invested the individual is in the idea and/or the resources available to that person, some research goes into how to best conduct the program and whether the market is really large enough to support the new endeavor. Be that as it may, the "impact" of the program is considered at some level. It is here that the value of the program is at least considered by the initiator.

So, why do I mention impact? A quick look at the Dow Jones Industrial Average, or for that matter just about any other index, gives you a sense that people are a bit nervous about money. Funders, be they government, private individuals, or foundations, are becoming a bit more selective as to who and what they are funding. They are no longer content with vague statements like "people learned something" or "the participants enjoyed the experience". Rather, they are starting to ask the question: "did the money really result in change, or at least forestall things getting worse?" It is within this context that nonprofits now promote their work.

However, there is more to the story...

While I work for a funder and answer to a Board of Directors that is concerned that its investments have an impact, there is also the notion that good business sense dictates that any viable organization has a strategic plan and that its activities support that plan. Embedded in the concept of the strategic plan are goals and statements of impact. As such, any organization that implements something new, or is maintaining some aspect of its programming, should have linkages between its actions and the global impacts the organization is trying to effect.

And so we get to the relationship of value and evaluation. In my case, I do not think it is always useful and/or appropriate to attempt to measure impact as part of an evaluation - but I do believe that evaluation should recognize that all programming should have some sort of impact. If a program is in its early stages, the data might not be there yet to see change, but it might be there for a pre-test. Further, since we are on the topic of value, the impact of the program might not be the focus of the major stakeholders at the time - but it should be acknowledged if only to inform the process evaluation (description of the context of the program and rationale).

But, before I push further and make the argument that evaluation doesn't exist if there isn't acknowledgement of planned impact or value, I need to clarify a specific distinction. Namely, there is a difference between the methodology of an evaluation and the purpose of an evaluation. Certainly, they are linked - but oftentimes I hear the statement, "we can't consider the impact of a program because it won't really happen until well past the funding ends". This statement is focused on the methodology of the evaluation, not the purpose. It is an excuse that attempts to take any consideration of impact off the table and focus everyone on only the actions of the program. Again, it is important to assess the process of a program, but without understanding the end (impact), a significant portion of the picture that should inform the process evaluation is lost. We can speak to the number of hours of training we give to septuagenarians and their experience with a program, but if we are attempting to change attitudes towards violence, unless their voice is really respected in their communities, odds are we are barking up the wrong tree. Now, hopefully, the training is actually deployed to individuals of all ages and the septuagenarians are just part of the mix, but if the methodology of the evaluation is not informed by the purpose of the program (and its goal impact), the training evaluation focus might change from sampling the elderly to sampling younger individuals. Further, in the case of social marketing, what speaks to the elderly might not reach the 20-somethings. And so, the data we gather in the evaluation, which may indicate that the 70-somethings are really enjoying and learning from the training, might not be indicative of how the rest of the "target audience" feels or learns.

So, I pose the argument that value and consideration of impact are important - whether impact is actually measured or not.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper
The Evaluation Evangelist

Monday, March 16, 2009

Introductions

Good day to you,

Having found this Blog, you came here for one of a few reasons:
  1. You are interested in evaluation and the idea of an Evaluation Evangelist excited you.
  2. You met me or one of the group excited by evaluation and heard about the site and thought you might take a look.
  3. You followed the link from the Missouri Foundation for Health's website to my Blog.
  4. Google hiccuped and your search brought you here versus whatever you were searching for.

Hopefully, having found this, you will find some information of interest. As this is the first post on this Blog, I'm not sure what fruit this experiment will bear, but I expect it to be worth both our whiles.

With the caveats above, let's delve into this further shall we?

What is an Evaluation Evangelist? To answer that, we need to discuss what is Evaluation and what is an Evangelist. There are long dissertations tied to both terms and at some point, I'll probably edit this posting to incorporate links to good descriptors, but for now let's go with a working definition.

Evangelist - Two "definitions" I like can be found here - http://en.wikipedia.org/wiki/Evangelist: Namely, in the religious connotation - a Christian [in my case an Evaluator] that explains his or her beliefs to a non-Christian [or in my case folks not yet versed in Evaluation] and thereby participates in Evangelism [attempting to convert people]. The second definition secularizes the "role" of an Evangelist by describing one as a person who enthusiastically promotes or supports something. Apple (http://apple.com) has had a group of individuals who held the actual title. In my case, the title is a commentary bestowed upon me by co-workers and others in the Evaluation field.

Speaking of Evaluation - There are probably a million different definitions for the term. The version I like best is something to the effect of: a systematic method of assessing something. Beyond that, there are discussions about whether an evaluation exists if there is no assessment of value (focus on the outcomes of the project), what minimum point of rigor in methodology constitutes an evaluation, who needs to be involved in the process, and on and on and on. In reality, it seems that globally we all have different viewpoints on what constitutes evaluation, and frankly, I do not consider myself part of any real distinctive camp - except one... I believe that an evaluation should be "theory-driven". In other words, the things we do are intended in some manner to affect an outcome. Put more simply, we do stuff because we want something to happen. When I turn the key in the starter of my car, I'm doing so because I want the car to start. There is a direct and perhaps a series of indirect outcomes of my action. But I did what I did for a reason. Theory-driven evaluation is the idea that the actors, the evaluator, and all other stakeholders (people interested and invested in the project or action) are expecting something to come of the action and are looking for the "impact" of the action. Now, there are times that we do not measure the "impact" - usually because it is expected to occur much later. Other times, a description of the actions is important to all interested parties. But my focus is on the idea that while we might only focus measurement on the actions and their produced outputs (products), we need to recognize the connection with the eventual outcomes (impacts). Using my example of turning the key - I can describe the amount of torque I use to turn the key, or for that matter, the process of finding and inserting the key... From a car design perspective, these are important things to measure - but the reason the key slot and the starter mechanism are there is to get the car started.

I'll speak more on the topic in a later post. In the mean time, John Gargani has an interesting commentary on the impact of a less than systematic assessment and how theory needs to play a role in some manner - http://gcoinc.wordpress.com/2009/03/13/data-free-evaluation/#more-161.

Returning to a little about me - my true role and title is Director of Evaluation for the Missouri Foundation for Health (http://mffh.org). The Foundation funds programs and projects focused on improving the health of Missouri's uninsured, underinsured, and underserved. I have oversight for evaluation for the Foundation, which means that I shape policy around evaluation as well as work with our contracted evaluators. I also have the occasional opportunity to conduct my own evaluations.

In the next series of posts, you will learn more about the policies we have enacted and how we have implemented them both within the Foundation and in support of the nonprofit community in Missouri.

Additionally, as you follow this Blog or catch a Twitter or Facebook comment, it is my fervent hope that you grow to love Evaluation as much as I do.

Best regards,

Charles Gasper