© 2002, Barry Sweeny
An “indicator” is what is actually measured.
Choosing indicators seems hard to many who must plan evaluations, but it is actually quite straightforward:
- Choose observable behaviors, such as those described by active verbs: report, design, collect, compare, describe, explain, or demonstrate.
- Unless you have some experience in evaluation, avoid the “fuzzier,” more passive, non-observable verbs, such as appreciate or understand.
However, there are times when we want information about a factor that is not easy to measure. That’s when we select an “indicator” of that factor and evaluate the indicator instead, as the best available alternative to trying to measure a “fuzzier” factor directly.
Here are two examples.
1. We may want to know how many students drop out of school AND come from families living in poverty. Collecting information on who drops out is very simple, but gathering data on family poverty would be very time-consuming, and some families would not willingly tell us their income level. So “level of income” may be what we want to know, but it is not a good indicator for us to use for data collection. Instead, schools almost always use the number of students who receive “free and reduced lunch” as the indicator of poverty. The data for that indicator are already provided to schools and so are easy to access and use, AND qualifying for free and reduced lunch shows that a family meets the government criteria for living in poverty, the very factor we really wanted to understand in the first place.
2. Measuring employees’ satisfaction with web-based training may be somewhat “fuzzy” and hard to observe, so we might feel pressed to just ask for self-reports telling us whether they like it or not. However, personal definitions of what it means to “like” something can result in data that is almost meaningless. On the other hand, if we conclude that “percent who finish an e-learning course” is a good observable indicator of satisfaction, we can determine what we want to know rather quickly and painlessly.
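The e-learning example above amounts to a simple calculation once the enrollment records are in hand. The sketch below is hypothetical: the record layout and the names in it are invented for illustration, not drawn from any actual training system.

```python
# Hypothetical enrollment records: (employee, finished_course)
enrollments = [
    ("Ana", True), ("Ben", False), ("Cal", True),
    ("Dee", True), ("Eli", False),
]

# The observable indicator: percent who finish the e-learning course,
# used here as a proxy for the "fuzzier" factor, satisfaction.
finished = sum(1 for _, done in enrollments if done)
completion_rate = 100 * finished / len(enrollments)
print(f"Completion rate: {completion_rate:.0f}%")  # → Completion rate: 60%
```

The point is not the arithmetic but the substitution: a hard-to-observe factor (satisfaction) is replaced by an easy-to-count behavior (completion), so the data can be gathered from records the program already keeps.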
Selecting good indicators to assess is a very important step in program evaluation because:
- Poor indicators produce data that may not be related to program factors we can control, and therefore improve
- Poor indicators produce data that may not really help us to improve the program
Selecting good indicators also requires that we choose a range of different kinds of indicators. Doing this helps us to consciously decide what the indicators we choose may actually indicate.
In other words, we could choose something as an indicator, thinking that it is a useful way to assess something. But by looking at the categories of indicators below, we may realize that the indicator we have chosen is actually not a good indicator of what we want to evaluate, OR that it is a better indicator of something we had not considered evaluating.
Resources (cost indicators)
• Costs related to each program component
• Leadership salaries & stipends (for % of time)
• Mentor incentives & recognition costs
• Released time allocated for mentoring & coaching
• Cost of training
• Cost of external program evaluator’s time & expertise
System Strategies (process indicators)
• Level of implementation on program “Theory of Change”
• # of program model components implemented
• % of proteges at each level of CBAM Stages of Concern
• Percent of training curriculum that is instructor-led versus web-based
Activities/Behaviors (program & behavior monitoring indicators)
• # of coaching cycles completed
• Attendance at trainings, at support group meetings
• # of hours mentors & proteges spend planning instruction
• # of protege observations made of other expert teachers
Contextual (cultural & norm indicators)
• Attitudes toward the Mentoring Program by managers
• Attitudes toward staff development in general by the Board
• Degree of managers’ comfort with informal staff leadership
• # of competing innovations and expectations
Customer Results (outcome indicators)
• % of novice employees retained over last 5 years
• % and # of training participants achieving “mastery” level on a post-training assessment
• # of experienced trainers receiving “High Performer” Certification
• % of participants completing e-learning courses