1. An “indicator” is a means of measuring progress and attainment of success: progress toward a goal, an objective, or implementation of an activity, for example.
The term is better understood when we rephrase that definition as follows:
2. An “indicator” is the tool you use to indicate when a goal has been reached, a project is done, etc. It tells you when your work has accomplished what it intended to do.
3. A third way of thinking of the indicator is to just define it as an assessment instrument, such as:
- a rubric with levels of quality for each factor assessed;
- a written test, whether pencil and paper, or online, generally with items that have only one right answer;
- an observation or a tasks checklist, also with items that have only one right answer;
- a Likert-type survey with items to be rated 1-5;
- a list of performance competencies or standards framed as a check off list;
- a list of performance competencies or standards with a 1-5 rating scale for each.
WHY MAKE SUCH A BIG DEAL OUT OF DEFINING AN “INDICATOR”?
1. The term “indicator” is often used in assessment and evaluation processes and guides, but those won’t help you much if you don’t fully understand what it is.
2. Also, when applying for program approval, such as at the state or provincial level, or when applying for a grant, the term “indicator” is often used, so you’ll need to know it to use it.
Can you just call it an assessment, or a measure? Sure, but “indicator” is the term we use in this Program Evaluation Section of the IMA web site.
Program evaluation is ALL about getting quality, trustworthy information to guide program level decisions and planning. That’s it.
Therefore, what is needed regarding indicators is to be clear about what you need to measure to be satisfied that what you wanted to happen really did happen. It’s all about what YOU and your team will accept as sufficient, believable evidence of progress, goal attainment, implementation, and so on.
Many tools are out there and are shared across programs. The only cautions when adopting someone else’s tool are:
1. To be sure that, IF it is a commercially produced assessment, you are not committing copyright “piracy” by using it. That may mean contacting the publisher, if you can figure out who that is, and paying a fee to use it in a PILOT with a very limited number of people for a very limited time. Try to negotiate a smaller fee until your use of the tool demonstrates it is worth buying and paying for unlimited use.
2. To be sure it will do what YOU need, and that the results of using it will be credible for your team. The best way to know is to pilot test the tool on a small, inexpensive, and short-term basis, then look at the results and ask whether it seems to do what it is supposed to do and what you expected and want it to do.
IMA has provided some tools in the “Tools” section of this web site, found in the navigation bar at the top of every page. Perhaps what you need is already available. ALL of what is there has been tested to be reliable and bias free. Also, the rights to use these have been negotiated for you; all you need to do is cite the IMA web site as the source.
A third option is to develop what you need yourself. If you do that, be sure to pilot those items, too.
Telling you HOW to develop your own assessments is not the role of this web site, but many guides are out there, especially on web sites for educators.