Tuesday, June 2, 2009

Impact and Responsibility

In a previous post, I wrote about how I thought that not considering the impact of a program is irresponsible. Yesterday, I was struck by a fantastic example where consideration of impact is critical. I haven't posted much about my family (other than a few tweets that you can see on the side of the blog page), but my father is significantly older than me. He lives in an assisted living community that always strikes me as a college dorm with walkers and Rascals. In any case, yesterday, in the early afternoon, he was taking some sun in the atrium (he refers to it as his vitamin D treatment). He got up to go back inside, and that's where the story gets fuzzy.

At some point, he was found in the atrium, on the ground. A large, bloody bump resided on his skull ("bump" seems to trivialize the size of the hematoma). He was disoriented and unable to get up off the ground. Given the nausea and short-term memory loss as well, the staff contacted me and emergency services to get him to the emergency room.

This is the background story for my discussion of outcomes...

Over the next nine hours that my father and I hung out together at the hospital, the thought of outcomes barely entered our minds. I should clarify: before I saw my father, outcomes were very strong in my mind, one in particular. After catching up with him in the emergency room and spending an hour or so with him as various scans were done of his body and blood was drawn, the "program" of my father's assessment and care was definitely being judged through the eyes of process versus outcome. The fear was gone, and now we both were focused, like most people spending time in an emergency department, on the process. How long is this taking? Another x-ray? More blood? Can't they find the vein? I could go on. Fortunately, the hospital staff did keep the outcome of the program in mind and steadily worked towards it.

Incidentally, the program got extended beyond nine hours in an emergency department to an overnight stay in the hospital - for observation. In this case, the tests done by the hospital staff indicated no lasting impact of the impact with the concrete ("road rash" at the point of impact indicated that he hit the concrete with his head), but being responsible evaluators, they chose, in conversation with his primary care physician, to keep my dad overnight for observation - to see if anything developed from the accident. Impact.

From an evaluator's perspective, I greatly appreciated the program that my dad experienced. From a concerned son's perspective, I did as well. Now, the hospital could have said that they performed their tests and found nothing wrong with my father and discharged him from the emergency department last night - much like some evaluations that place the sole post-intervention measurement immediately after the intervention or program is concluded. Further, the folks at the residence could have halted the process of assessment when he regained his faculties at the nurse's station. After all, other than the large bump on his head and a few abrasions, he really was "fine" - back to his "normal" self. Or, the nurses at his residence could have simply assumed that he would be ok if he just went back to his room, which is where he was heading in the first place. No one had to do an assessment of whether he was ok (disoriented or not). His original plan was just to go back to his room - if no assessment had been done of the impact of his fall, none of us would have been the wiser (other than the nasty bloody thing on his head and some additional stiffness in his movements). Instead, the staff of the residence and the hospital did the responsible thing and continued to assess impact.

Now, I recognize that some programs are so new that there is not yet anything to measure outcome-wise. I also recognize that you can only follow an impact so far before you either lose contact with your participants or the cost of the evaluation far exceeds the benefits of following up. That being said, more than lip service must be paid to outcomes. At minimum, they need to be considered, even in the early stages of program design and implementation (I would argue that is probably the most critical point of consideration), and reasonable measures should be taken to make sure that the outcomes are represented in the evaluation.

Now, I would not be surprised if you have read the above and said something to the effect of, "Great, so you want me to consider outcomes in my program and evaluation designs. You want me to measure them, if possible, too. So, how much is good enough?"

The answer to that question is to be responsible. Recognize what you need to know to determine that the program is doing what you want it to do. In the case of my father's head injury - the head containing one of the more critical organs for his survival and quality of life - the staff of the residence, and later the hospital, thought he required more outcomes assessment. However, along the way, there was also evidence of relationships between each activity of the programs administered by the residence and hospital staff and the outcome of my dad surviving and feeling better. They got him off the ground and into a cool setting (it was 90 degrees Fahrenheit and probably much hotter in the atrium). They helped him with his nausea (mostly just by providing him a bucket). And they reassured him. Along the way, they conducted their process and outcomes evaluations. But each activity had a theoretical link to my dad's health and well-being, and each assessment grew off the previous one.

Looking at the chain of interventions and assessments conducted by the staff of both places, you can see how each one had a decision point as to whether to add the next intervention or not, whether to add the next assessment or not. These linkages are part of their "theory of care," with each linked to that endpoint outcome. And based upon how my dad was doing, they chose to continue down the theory of care with the associated evaluations and assessments. In the end, the assessments took us to where we are today, with him in the hospital under observation. If he had been younger, with fewer comorbid conditions, he probably would have been discharged from the emergency room last night. If the bump hadn't formed, or if he had not been disoriented, he probably would have hung out in the nursing station for quite a while yesterday and not gone to the emergency room. The point I'm making here is that the evaluation reflected the staff's need to gather more information to be responsible to the overall outcome of the interventions - my dad surviving with reduced lasting effects from the fall.

So, I would argue that in this case the evaluation has been done appropriately, with correct attention to outcomes. Now, when the hospital discharges my dad (hopefully today), I expect that their evaluation of him will also end. There will be the expected handoff to his primary care physician (more than some uninsured folks will get, by the way), but the hospital staff will have satisfied themselves that my dad is fine when they discharge him. They will have counted their interventions a success. Now, in this case, my father doesn't have a chronic disease tied to the program, so no longer-term tracking is necessary - but if he did, I would expect that his primary care physician would follow up and track the issues going forward.

So, the statement I'm making is that the evaluation of outcomes must be rigorous enough that the program staff and participants can be truly satisfied that the program does what it purports to do. I often see evaluations of educational programming with a pre-test/post-test methodology. On paper, this sounds pretty good. The evaluation often focuses on change in knowledge and sometimes includes change in attitude. However, the post-test is often conducted immediately following the training. I would find this acceptable if the evaluator or program staff could give me a theory of retention or the like that would give us a sense of what this really indicates for longer-term knowledge, behavioral, and attitudinal change. Instead, the post-test is often presented as if it were the ultimate outcome evaluation. To me, this is irresponsible. It would be the same as if the residence staff had taken a look at my dad after he sat in the nurses' station for a while (both to cool off and to get his bearings) and, seeing that he was fully oriented again, just let him go back to his room - with little concern that being able to tell them who he was and where he was would be a poor indicator of any bleeding inside the skull that could cause other issues later.
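To make that concern concrete, here is a minimal sketch (in Python, with scores I've invented purely for illustration - not real evaluation data) of how an immediate post-test can overstate a program's outcome when compared against a delayed follow-up:

```python
# Hypothetical illustration: knowledge scores (0-100) for the same six
# participants at three points around a training. All numbers are
# invented for this example - they are not real evaluation data.
pre_test       = [52, 60, 48, 55, 63, 50]
post_immediate = [78, 85, 74, 80, 88, 76]  # measured right after the training
post_delayed   = [60, 70, 55, 62, 71, 58]  # measured, say, three months later

def mean_gain(before, after):
    """Average per-participant change in score."""
    return sum(a - b for b, a in zip(before, after)) / len(before)

print(f"Immediate gain: {mean_gain(pre_test, post_immediate):.1f} points")
print(f"Delayed gain:   {mean_gain(pre_test, post_delayed):.1f} points")
# The immediate post-test shows a much larger gain than the delayed
# follow-up; stopping at the immediate measure would overstate the outcome.
```

The arithmetic isn't the point; the point is that the delayed measure (or, failing that, an explicit theory of retention) is what connects the immediate gain to the outcome you actually care about.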

Speaking of responsibility - I alluded to the process of my dad's care in the emergency department earlier in this post. We both were very much focused on the process and the discomforts associated with being in an emergency room for nine hours with the associated tests. One of the common process evaluation measures - and, for that matter, one sometimes pushed at me as an outcome measure (shame on you) - is participant satisfaction. I can tell you that my dad was not satisfied with his care at the moment, and I was pretty put out that we had to sit and wait and wait and wait in uncomfortable furniture. That would have been a poor measure of the program. I'm not even sure how they could have made the experience more pleasurable (other than more comfortable surroundings and perhaps a clown that did balloon animals), but to be honest, that isn't their core business. Their core business (the program) is to get people triaged, get them stabilized, and get them the care they need to move either into the hospital for more treatment or back out the door. For that matter, let's say that we do receive an assessment of satisfaction when my dad is discharged. Does the fact that my dad's room is rather nice looking, has a couch that I could sleep on, and has more cable channels than either of us has at home really assess the program? Certainly it might be a predictor of whether we return to that hospital for care (and as such is important for the hospital to collect), but it really doesn't determine whether the interventions my dad received at the hospital had an impact on his health - the core of the program.

So, now that I have gone on at length with one example to emphasize a point, I'm going to flog that dead horse just a bit more...

All I ask is that program designers, implementers, and evaluators really think about the outcomes of a program and the core of its intent. I ask that they be honest with themselves about what is important and what is really measurable. I ask them to draw the direct connections between their resources (inputs), activities, associated outputs, and short-, medium-, and long-term outcomes. At minimum, consider these, then figure out what can really be measured, and where you have to, draw upon theory to make the connections to the unmeasurable.
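For those who want to see those connections written down, here is a minimal sketch (in Python; the program, activities, and measures are hypothetical examples I made up, not drawn from any real evaluation) of a logic model as a simple data structure, with each outcome either tied to a measure or flagged as needing a theoretical justification:

```python
# A logic model written down explicitly: inputs, activities, outputs,
# and outcomes, with each outcome paired with its measure (or with None
# where no direct measure exists). All entries are hypothetical.
logic_model = {
    "inputs": ["trainer time", "curriculum", "meeting space"],
    "activities": ["deliver workshop series"],
    "outputs": ["participants trained"],
    "outcomes": [
        {"term": "short", "outcome": "knowledge gain",
         "measure": "pre-test/post-test"},
        {"term": "medium", "outcome": "behavior change",
         "measure": "three-month follow-up survey"},
        {"term": "long", "outcome": "improved health status",
         "measure": None},  # not directly measurable here
    ],
}

# Walk the outcome chain: anything without a measure has to be
# connected to the measured outcomes by theory, not just asserted.
for o in logic_model["outcomes"]:
    how = o["measure"] or "no direct measure - justify the link via theory"
    print(f"{o['term']}-term outcome: {o['outcome']} -> {how}")
```

Writing the model down this way forces the honesty I'm asking for: the gaps (the `None` entries) are visible, rather than quietly papered over by an immediate post-test.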

As always, comments are encouraged.

Best regards,

Charles Gasper
