Wednesday, July 29, 2009

Evaluation & Dissemination Can Save a Life

Warning: the following post contains images and discussion tied to the 2004 Indonesian Tsunami.

At 00:58:53 UTC on December 26, 2004, a magnitude 9.1 subduction-zone earthquake struck off the west coast of Sumatra. The event was recorded on seismographs throughout the world, and the Pacific Tsunami Warning Center issued a bulletin at 01:14 UTC (http://www.prh.noaa.gov/ptwc/messages/pacific/2004/pacific.2004.12.26.011419.txt). That bulletin was followed by another at 02:04 UTC (http://www.prh.noaa.gov/ptwc/messages/pacific/2004/pacific.2004.12.26.020428.txt), upgrading the magnitude estimate of the earthquake from 8.0 to 8.5. The same center issued subsequent bulletins for the Pacific Basin on December 27, 2004, advising coastal areas of sea-level fluctuations of up to half a meter from crest to trough.

The Indonesian Tsunami resulted in an estimated 230,000 deaths and is an example of how a form of evaluation, in this case observational assessment, could have saved lives.

The Pacific Tsunami Warning Center was established in 1949 in reaction to the 1946 Aleutian Islands Tsunami, which was tied to 165 deaths between Alaska and Hawaii. The system relies initially on seismic data, which was all the center had available at the time of the 2004 Tsunami. No similar center existed for the Indian Ocean, the closest being in Japan. However, the initial magnitude of the quake did indicate to some individuals that a significant event would occur in the Indian Ocean.

At this point this post will move from fact to a bit of conjecture, as I am unable to find actual evidence that the following is true, but it certainly seems plausible.

Apparently, NOAA (the National Oceanic and Atmospheric Administration), a United States Department of Commerce agency, attempted to contact government officials in the countries bordering the Indian Ocean to provide warning. Further, as the Tsunami made its way across the ocean, the warning was passed from government to government. Unfortunately, no method was in place to effectively and efficiently disseminate this critical information, and hundreds of thousands of people were caught unaware, their first warning being either the dramatic drop in sea level caused by the preceding trough or the froth of the crest on the horizon.

Be that as it may, two major facts stand out from this tragedy:
1) Only incomplete evaluation systems were available to assess the situation (NOAA knew there was a possibility of a Tsunami, but had no idea of its intensity or direction).
2) There was no dissemination plan for the emergency.

The United Nations has since taken steps to ensure that this "evaluation" of sea level is implemented consistently, and there are plans for similar systems in the Mediterranean Sea, the Northeastern Atlantic, and the Caribbean Sea. The expectation is that major communities can be given warnings that allow for timely evacuations. The issue is still dissemination, and it remains to be seen how effective the program will be.

There is an even more striking "measure," not discovered until recently, that jarred my thoughts around evaluation. The Centre for Remote Imaging, Sensing and Processing recently made available images that it unearthed while reviewing satellite imagery from the date of the Tsunami. As can be seen in the linked images (http://www.crisp.nus.edu.sg/tsunami/waves_20041226/waves_zoom_in_b.jpg and http://www.crisp.nus.edu.sg/tsunami/waves_20041226/waves_zoom_in_c.jpg), the waves from the Tsunami were clearly visible, giving a sense of their intensity and direction of propagation.

The warning system, which had the trip-wire measure of the seismograph, could have been enhanced with information from the satellite images. But again, that "evaluation" would have been useless, as it would not have reached the individuals who would have benefited most from it. Instead, the connected world watched helplessly as the waves crossed the Indian Ocean and inundated coastal communities, killing hundreds of thousands of people.

Being an evaluator, I take home two critical thoughts from this:
1) Evaluation of systems is very important; in some cases it doesn't just describe a program, it can in fact save lives.
2) Evaluation is only good if it is placed in the hands of the people who need the information to inform their decision making.

Evaluation can serve as a warning of impending doom for a program.

Granted, the earthquake and subsequent tsunami were a very low-probability event for the region, even though the region rests in a very tectonically violent area. The UN has "learned its lesson" and is implementing a system in the region, deploying evaluation after the effort to keep the area's population safe failed (reached a very negative outcome). Today, nonprofits, for-profit businesses, funders, and the like (really, all of us) have felt tremors that might indicate a larger earthquake that could crush us further. Even given the fragility of our economies, knowing where a program is going and what is going well (and, conversely, what is not) is critical. In most cases, the programming isn't going to harm individuals, and its loss will not have the same widespread impact as a tsunami, but the metaphor holds. A few key sensors connected to the right stakeholders can forestall major disaster.

As always, I'm open to your comments, suggestions, and questions. Please feel free to post them.

Best regards,

Charles Gasper

The Evaluation Evangelist

Thursday, July 2, 2009

Evaluation, "Best Practices", and Fears

Those of you following my blog and Facebook updates/tweets know that I traveled out to Baltimore a few weeks ago to speak about how our Foundation uses the information we collect as part of evaluation to inform our strategic planning. Recently, I have been in the throes of data analysis for an evaluation we are doing of an initiative that funds organizations with whom our Foundation does not normally partner. These organizations are not traditionally associated with health care delivery, but were viewed by our Board as a possible link to populations not normally reached by our more traditional partners. The evaluation and analysis have gone well, and I have some interesting things to share with our staff, Board, and the grantees we serve – but the experience has given me food for thought.

First and foremost, there is a common issue of organizations collecting data but not having the resources to aggregate it in a meaningful manner. The solution is pretty simple and inexpensive relative to other interventions: for those organizations without it, buy them a spreadsheet program like Excel and/or some data management software like Access. Of course, the other key part is to teach them how to use the software. It is very frustrating for an evaluator to discover that the data is all there, ready to be used, but cannot be accessed by the people who need to see it, because they are just unable to take the next step and put it into some sort of system that makes it available for analysis.
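
To show just how small a lift that next step can be, here is a minimal sketch of the kind of aggregation I have in mind. It is only an illustration: the file name (service_log.csv) and its columns (program, month, clients_served) are hypothetical, and an Excel pivot table over the same data would accomplish exactly the same thing without a line of code.

import csv
from collections import defaultdict

# Tally clients served per program, per month, from the raw log
# the organization already keeps (one row per client contact).
# File name and column names here are made up for illustration.
totals = defaultdict(int)
with open("service_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[(row["program"], row["month"])] += int(row["clients_served"])

# Write a summary table that staff can open directly in Excel.
with open("monthly_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["program", "month", "clients_served"])
    for (program, month), count in sorted(totals.items()):
        writer.writerow([program, month, count])

The point is not the tool; it is that the data the organization already collects becomes something it can actually read and act on.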

However, this is not the issue I'm interested in exploring in this post. Rather, it is a symptom of another problem – namely, the requirement by funders that grantees collect data to report. In many cases, if the grantee were using the data to support their internal operations, they would already have mechanisms to analyze it – even if that is tabulation on a piece of paper. Instead, funders are asking for information that the grantee does not find valuable to their own organization, and thus they collect and collect and then have difficulty figuring out what to do next.

Part of the underlying issue has to do with the evaluation community that I belong to. Yep, I'm calling us out on this one. If I had a dollar for every time I have heard an evaluation contractor or organization (funders included) try to "sell" a nonprofit on evaluation by leading with "the data you collect will help you solicit funding," I would be a wealthy man. Raise your hand if you have heard or said it. (Incidentally, my hand is raised too. I'm just as guilty.)

We have trained the nonprofit community to believe that evaluation is something you add on to a program, not something that can be a useful part of it. But worse, while many funders require an evaluation, they often do not fund it or provide additional capacity building for their grantees. I'm finding that few foundations actually provide workshops, much less more in-depth evaluation training, to their grantees.

But it gets worse – much worse – and, from my recent experience, somewhat scary for the future of funding.

There is a growing trend for funders to focus on funding "Best Practices". At the conference in Baltimore, I heard that phrase tossed around often. The staff at my own organization specifically ask for "Best Practice" programming when soliciting proposals for certain initiatives and priority areas. But here's the dilemma: if no one is supporting evaluation, how are "Best Practices" identified?

Recently, one of our Board members shared with me a program that is very interesting and, judging from the evaluative work they have done so far, very promising. The Board member came to me looking to see if I knew of anyone who funded evaluation. Our Foundation funds programs, and the evaluations are in support of those programs. In this case, they were looking to establish the evidence that the program works well in order to promote it nationally. Beyond a small group of funders, you would be hard pressed to find someone who would fund an evaluation on the scale they are requesting – and then only for the programming that funder supports. So the question put to me was, "Where do we find funding just for evaluation?" And the answer? I had no clue. Well, I did have one, and I'm forwarding it on to a colleague who will remain nameless, as I want to surprise her with it. But even in this case, her organization funds research, not evaluation, so it is probably a stretch.

The point I'm making is that funders and nonprofits rely upon evaluation, even if they aren't aware that they are doing so. "Best Practices" evolve from the results of evaluative work. But if nonprofits are only doing evaluations to appease funding requirements or to solicit more funding, then the quality and focus of the evaluation often shift away from program improvement and analysis toward tracking operational goals (basic outputs) and toward outcomes selected either to demonstrate success (as opposed to testing for it) or chosen by a funder who, compared to those implementing the program, is not the expert in it.

So, back to the grantees and the programs I've been evaluating. First of all, given the funding trend, such innovative funding might be squelched in the future because the programs do not have "Best Practice" status. The programs aren't as effective as they could be because the organizations aren't fully aware of how their programming is performing. And yes, I must admit, their lack of knowledge of Excel or another spreadsheet program was a blind spot for me. I'm moving to rectify it for my grantees, but it makes me wonder how many more nonprofits lack the skill and resources to take the hard work they have done in collecting data and make it more useful. How many organizations still tabulate on a piece of paper or, at best, use their spreadsheet program for only that purpose? I'm afraid of the answer, just as I am afraid of the focus on "Best Practices".

I have an answer that I'm implementing within my own sphere of influence – mostly with my own grantees.

1) Over a year ago, I stopped "selling" evaluation as a way to get more funding and began focusing grantees' (and my own Foundation's) attention on program improvement.

2) I have started workshops for my grantees and other nonprofits in the region to teach evaluation methods.

3) I require that all grantees receiving funding beyond basic infrastructure conduct an internal evaluation.

4) My Foundation funds these evaluations, along with measurement supporting additional questions we have about the initiative group as a whole.

But my experience has also taught me that I need to provide more basic support for the grantees, in the form of funding for software and training in how to use it.

So, what can you do? If you are a funder or work for one, you might want to take a look at what your policies are around evaluation and how you use it, as well as what you encourage your grantees to do with it. If you work for a nonprofit, I urge you to think about how you use the measures you collect and why you collect them. But before you run off to do that (because I know this has inspired you to push away from your computer and get busy), I would greatly like to hear your thoughts and reflections.

As always, I look forward to any comments or questions you might have. Please feel free to post comments.

Best regards,

Charles Gasper

The Evaluation Evangelist