© 2004, Barry Sweeny
1. “PARAMETERS” – This step is taken to set boundaries around the work to be done so your intentions are more likely to be implemented during the process. Examples might include statements like:
- We WILL limit the size of instruments and the time it takes to complete them.
- Therefore, we WILL NOT collect data UNLESS we are sure that we will use it.
- Therefore, we WILL decide in advance how we will use all the data we plan to collect.
- We WILL provide data summaries, as well as the final conclusions and recommendations to all participants.
2. “STANDARDS” – Standards determine the technical quality of a process, its instruments, and the resulting data and decisions. The reliability, validity, and fairness of data and conclusions are examples. The more “high stakes” the use of the program evaluation results will be, the higher the quality you need.
- HIGH STAKES – If the support needed to sustain funding for your program is at stake, you may need data which are more credible and conclusions which can be demonstrated to be true (eliminating doubts). In that case, you will need to use a higher quality process and instruments.
- MEDIUM STAKES – If your conclusions are to demonstrate the need for greater released time for professional development activities, then you may not have as great a need to demonstrate use of a high quality process and instruments. Failing to win support for such a need does not doom your whole program.
- LOW STAKES – If YOU are the primary audience for the data and conclusions, there may be little or no need to demonstrate the use of a high quality process and instruments.
The problem is that quality requires greater effort and time, so quality costs money.
Examples of an evaluation standard might include:
- A decision to collect data from at least three diverse sources for each research question (to allow comparison of viewpoints).
- A decision to always use both open-ended and fixed-choice questions.
BOTH of these are very good practices, but quality = $.