Let’s just get it out of the way - March in the United States is about basketball, college basketball to be exact. The colleges have been playing since November, but it is in March that the general public (people who normally don’t follow collegiate basketball) suddenly gets interested and involved in filling out “brackets”. If you haven’t seen one of these - here is mine...
So what is the relationship between the assignment of schools to the brackets and evaluation? Well, it isn’t such a simple explanation. What you see are my selections, but the story isn’t about what I wound up selecting, but rather the process I took in selecting them and the manner in which others make their own choices. Let’s be honest, one of the major reasons we engage in evaluative work is to predict how something or someone will perform in the future. In the case of selecting which schools are going to advance through the tournament, there is a good amount of evaluative data on the page for you to see.
The schools are ranked going into the tournament.
Their individual win/loss records are presented.
The locations where the games are being played are also presented.
Finally, who they are scheduled to play is part of the bracket.
While the analysts and bookies who spend significantly more time on this than I do bring additional evaluative data to bear, we still have some interesting data that speaks to how we use evaluative data to make decisions. Let’s break this down by the manner in which I looked at the information and came to the startling conclusion that UCLA will win this year’s NCAA tournament.
One of the things you will note, if you pore over my selections, is that the higher ranked teams tend to be the ones I picked. They also tended to have a higher win/loss ratio. This basic information explains some of my decision process. However, the context of the game also plays a factor. Much is made of the impact crowds have on games. While they play a different sport, the Seattle Seahawks in professional American football and Texas A&M’s collegiate football team promote their in-stadium fans as the “twelfth man” - recognizing their “contribution” to the game. So, the context of the game and a guesstimate of the ratio of fans in the arena played a factor.
There is also missing data that went into the decision process… Wait! Did I say, missing data?
It isn’t missing; rather, it is evaluative data I used that wasn’t included on the paper - it came from other sources of “truth”. To share a bit more about me, I’m an athlete and a coach. I have participated in and coached multiple sports at a competitive level, and I recognize that in addition to the crowd’s influence on the outcome of the game, there are other factors that can affect an athlete’s performance. This leads me to explain why I picked UCLA to win, much less why some teams advance over others. Namely, I’m familiar with the history of UCLA basketball and the fact that the coaches and the athletes can “call upon” that history to give them a bit of an extra boost in their games.
This all leads me to explain more around why I’m using the tournament as a metaphor for a program and the information in the brackets as evaluative data. There are a few lessons to learn here.
- I’m taking into account both the context of the game (program) and incorporating my sources of truth. Good evaluation practices should incorporate what the stakeholders revere as sources of information and data, along with the context of the program implementation. Better use of evaluation data for decision making should also take these factors into account.
- I’m clearly using flawed data for my decision making. Just look at my final game - UCLA versus Michigan. I know UCLA’s history; I don’t know Michigan’s. It may be that Michigan has an incredible history as well, and to be honest, there is a nagging part of my brain saying that there is something there, something big. However, I made the decision to ignore that part of my brain when making my selection.
- Speaking of ignoring information, let’s look at my decision to pick Stanford over Kansas. This highlights decisions that can be disguised as informed by evaluation, but in fact are made “from the heart”. Living in Missouri, there is a bias against Kansas - don’t ask me where it comes from, but there has been at minimum a rivalry for years between the states’ schools. Add that I attended a Pac-12 school for a period of time, and my “allegiances” and thus my decision making become clear. The lesson here is that while we would like to say that our programmatic decisions are driven by evaluative data, our own biases do creep in. As leaders who use evaluative data to make decisions, we need to recognize our biases and be honest with ourselves and our teams.
It is these lessons that will serve us all well, both in working as evaluators and as consumers of evaluative information and decision makers. Paying attention to the context of the program, the agreed-upon sources of truth, and the fact that some information may not be found in the official structures of the evaluation helps improve our understanding of the program and informs decision making going forward. And even if we have all the data we need, the complexity of the program may result in a different outcome than we expect - as history will certainly prove with my bracket results.
As always, I’m open to your comments, suggestions, and questions. Please feel free to post comments.
The Evaluation Evangelist