Develop & Use a Program Chain of Causes and Effects

Barry Sweeny, © 2010


How we design our program model and how we design our program evaluation model are one and the same process. Here is the best practice for aligning those two processes, for quicker success, and for sustaining the program long-term.



The Challenges of DOING Program Evaluation

Isn’t it true that the time we need to learn how to provide quality processes is in direct competition with the very time we need to do those processes!?

However, if we don’t make the time to evaluate the program for ways to improve what we are doing, this situation may become a fatal flaw. If we do not evaluate and continually improve what we do, we may not be allowed to continue doing it, especially in these tough economic times. You must be able to demonstrate the impact of your program if you expect decision makers to continue to support your work. BUT do not wait for them to ask for the evidence. Prepare the processes, collect and analyze the data, and report on results before they ask for it.

Effective program evaluation has become a vital necessity, not just to become more effective, but for survival!

So… what do we need to do? We need to learn and do BEST PRACTICE PROGRAM EVALUATION, and get it right the first time. We may not have a lot of time.


The Best Practice Solution Simply Stated

Simply stated, the KEY CONCEPT is to DESIGN and EVALUATE your program as a “Chain of Causes and Effects”.

Sorry, the above statement is very easy to SAY, but NOT so easy to DO. This is partly because it is complex, although hopefully this article will help. It is also partly because, just like everything else, learning something new takes time. Never fear – IMA is here, and we have the best practices laid out step-by-step for you.

There are four aspects of this task:


1. Define the Whole Chain of Causes & Effects

Improving bottom line results is almost always the real reason for change initiatives. Yet, such results are the last link in a long chain of events that are each necessary to attain improved results.

Earlier links in the chain include work plans that are aligned to the standards, assessment that is aligned to the work flow, employees who have the skills to effectively do that work, employees who actually do the needed work well, sufficient materials and technologies to support effective aligned work, etc.

Before we can expect the last link in the chain to be in place, we understand that every other link in the sequence must first be there.

Here is a diagram of a “Chain of Causes and Effects for Education“.

The Education Chain of Causes and Effects

In this example, the left end is the final desired result, so the work to build the program and to start assessing each link would proceed from right to left along the chain.

Even if you are not in education, YOUR chain of causes and effects would work in a similar way, but just with different labels that fit your context. The point is for you to define what the links are – what are the steps that form the basic assumptions for how your program is expected to function?

Think of it this way: “If we do X, we believe that Y will happen. If Y happens, then we expect Z to happen next. When Z happens, we think A will be the result.” And so on.

In mentoring, this chain of assumptions may be something like the following:

  1. The foundations of an effective mentor program are the use of best practices drawn from the research to guide decisions, and the use of a developmental model for learning as the basis for program and mentoring design, delivery, and evaluation. >>
  2. A collaborative partnership of committed and thoughtful stakeholders is the ideal advisory board structure to create & support an effective mentoring program. >>
  3. A well supported and guided program director can plan and lead implementation and improvement of an effective mentor program, and mentor the mentors. >>
  4. A comprehensive, research-based mentor program plan which is piloted and gradually phased-in will lead to an effectively functioning mentoring program. >>
  5. A comprehensive, research-based mentor program will effectively recruit, select, match, train, support, and recognize effective mentors. >>
  6. Effective, research-based recruitment, selection, and matching will provide adequate numbers of appropriate, committed mentors. >>
  7. Effective, research-based training will adequately prepare mentors to assess and address the needs of proteges, and support and guide them through stages of development to become effective employees and leaders. >>
  8. Effective, research-based mentors, who have quality training and make skilled use of the developmental model, will help proteges learn quickly how to succeed and contribute. >>
  9. Successful, thriving proteges will perform and produce at high levels, and will have the skills and motivation to support others‘ learning and growth, and to keep learning themselves. >>
  10. High performing, productive, supportive, and continually learning proteges will be retained, will become skillful mentors and leaders themselves, and will help the organization achieve its strategic goals.

Write out your “chain” – what you assume will happen, stating it in the sequence in which you expect it to happen if all stages are well done, and ending with the desired results defined in your program goals. Feel free to adapt from the above “chain”.


2. Collect Data on Every Link in the Chain

Program assessment must be designed to provide the data we need to:

  • Make well informed decisions and plans;
  • Identify which program components (links) are working effectively;
  • Identify program elements (links) that are missing;
  • Identify program activities (links) which are not working effectively;

However, the data we collect must not be unrelated fragments – the data should relate to the effectiveness of the sequence (chain) of program causes and effects. Using the “Chain” concept will help illustrate the best practice we need to use.

When we say, “each link works effectively” we mean three things. These three things are what we assess when we look at a program component (link):

  1. The program link knows and builds on the work of the previous links;
  2. The link uses best practices and produces the effect it is designed to;
  3. The link anticipates the next step in the sequence and lays the groundwork for success in the next link.

Collecting these data requires working our way along the “chain of causes and effects”.

  1. We start at the end of the chain which is farthest from the results, and we check each link and the data about that link to see if those data show that link works well by the above definition.
  2. If the first link is performing as needed, we check the second link, and then each link after that.
  3. When we get to a link that is not working as needed, we identify which of the three factors above is not being met.
  4. Then, we fix the problem (revise the activity) with that weak link, so that link can cause the desired effects in the next link in the chain.
  5. As we perfect the work of each link, the impact increases and moves farther down the chain toward the planned result.
  6. In this way, we can map where we are in the chain, if progress is being made, where progress is being stopped, and if we are eventually causing the desired effects.
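The stepwise link check above can be sketched in code. This is only an illustration, not part of the article; the link names and the pass/fail values are hypothetical, and in practice each check would be answered by the data you collect about that program element.

```python
# Sketch: walk a "chain of causes and effects" from the link farthest
# from the results, and report the first weak link plus which of the
# three factors (builds on previous, produces its effect, prepares the
# next link) is not being met. All data here is made up for illustration.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    builds_on_previous: bool   # factor 1: knows and builds on prior links
    produces_its_effect: bool  # factor 2: uses best practice, causes its intended effect
    prepares_next_link: bool   # factor 3: lays the groundwork for the next link

    def is_strong(self) -> bool:
        return (self.builds_on_previous
                and self.produces_its_effect
                and self.prepares_next_link)

def first_weak_link(chain):
    """Check each link in order; return (name, failed factors) for the
    first weak link, or None if every link is strong."""
    for link in chain:
        if not link.is_strong():
            failed = [factor for factor, ok in [
                ("builds on previous", link.builds_on_previous),
                ("produces its effect", link.produces_its_effect),
                ("prepares next link", link.prepares_next_link),
            ] if not ok]
            return link.name, failed
    return None

# Hypothetical assessment data for the first four links of a mentoring chain:
chain = [
    Link("research-based foundations", True, True, True),
    Link("planning group", True, True, True),
    Link("program director", True, False, True),
    Link("program plan", True, True, True),
]
print(first_weak_link(chain))  # -> ('program director', ['produces its effect'])
```

Once the weak link is fixed (the activity revised), the same check is simply run again, which mirrors steps 4–6 above: effects accumulate down the chain as each link is strengthened.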

Our evaluation process does not expect to quickly attain the desired results, because those results are at the very end of the chain. We understand that we have to build a strong chain, with each link causing effects that transfer to the next link, increasingly passing the positive effects through every link in the chain to finally reach the end. We may wish it were otherwise, but development takes time because change is hard.


3. Assess Early and Middle Indicators For Progress

As we move to this next part of the process, we need to remember that each link in our chain (each program element) must be assessed for strength and for the effects it causes down the chain.

  • Of course, the early links in the chain will become strong and effective first. In assessment terms we call these “early indicators”. These early indicators tell us that we are some distance from seeing final results, but we are also seeing proof that what we are doing is beginning to work – so we just need more time. Without this encouragement, we may give up on what we’re doing, thinking, “It’s not working!” Sound familiar?
  • After a while, links further down the chain will become strong and effective too. We call these the “middle indicators”. They are signs that the results we need are getting closer, and again, we need to keep doing what’s working well.
  • Eventually, we near the end of the chain and reach what we call the “later indicators”. These tell us the results we want are almost achieved and it’s time to start watching for them.
  • Finally, doing things well all down the chain will get us to the desired end result.

If we use the 9-link “chain” example we described in Section 1 above:

  • The early indicators would be the first links to change, in the following order:
    • Identifying the research and developmental model by which we will be guided;
    • An effective Program Planning Group;
    • An effective program director;
    • A comprehensive, research-based mentor program plan which is piloted and gradually phased-in;
  • When the early indicators show us the first few program links are working well, the middle indicators are the next links to change:
    • Effective recruitment, selection, and matching will provide adequate numbers of appropriate, committed mentors.
    • Effective training will adequately prepare mentors to assess and address the needs of proteges, and support and guide them through developmental stages.
    • Effective mentoring, which is based on quality training and skilled use of the developmental model, will help proteges learn quickly how to succeed.
  • When the middle indicators show us the center program links are working well, the later indicators will be the next links to change:
    • Successful, thriving proteges will perform and produce at high levels, and will have the skills and motivation to support others’ learning and growth, and to keep learning themselves.
    • High performing, productive, supportive, and continually learning proteges will be retained, will become skillful mentors and leaders themselves, and will help the organization achieve its strategic goals.
  • When the later indicators show us the final program links are working well, then it’s reasonable to expect the final desired results to begin showing up.

This illustrates how we can use program evaluation to track the progress of causes and effects down through our program chain. BUT this only happens, and the desired results are only achieved, if we design and build the program and our program evaluation as a chain.


Here are TWO EXAMPLES:

1. Education – When a middle school goes to team-based structures with daily team planning time built in, one of the first things to indicate that the change is starting is that student and teacher attendance improves. Conversely, one of the last things to change will be student learning, the real purpose for the original change to an effective middle school structure.

It may take several years for all the necessary links to be put in place to where teachers are providing improved instruction and, finally, student learning begins to visibly improve. If we watch for improved student learning early on in the process, we may wrongly conclude that “nothing is happening”.

2. Here is a more specific mentoring example. Even if improved employee performance or implementation of training is the desired result from mentoring, the first things that have to change are building a strong, trusting, confidential relationship, setting goals for the work, and meeting often enough to make progress on implementing the work plans. Therefore, some of the first EARLY indicators of change will be the frequency and duration of mentoring pair meetings, topics discussed, reports of the quality of the relationship, baseline data collected on competencies, and the setting of goals.

MID-process indicators for the desired change may be number of coaching observations conducted and the number of other professional growth plan activities completed. Just keep building the chain.

The big lesson here is to watch for and capture the early and mid-process indicators in the growth sequence, and to expect the later indicators to change only when the early indicators are all strong. Here rests the reason for using the developmental model of learning: to effectively track growth through a sequence (chain), we need to know what that sequence is. The developmental model provides a research-based chain for us, so we don’t have to figure that out on our own.


4. Try to Anticipate the TIMING of the Whole Sequence

When all the early indicators are strong, we can assume that all the necessary things are in place for later results to improve. Only then is it reasonable and time to expect that productivity and desired results should begin to improve.

Improving final results is a developmental process, just like gardening. We cannot expect the harvest until all the prior steps in the process have had time to happen. In development, you cannot plant the seeds one day and harvest the crop the next. This is why time is so critical in development, and why understanding how long things take is critical to success.

In mentoring, the earlier indicators are numerous. To understand what should be assessed, make a list of the sequence of steps that you think must occur to get all the links in the chain of effective mentoring programs and practices in place. In other words, what are the necessary steps, and how long does it take for:

  • (Business)  Beginning managers to become effective leaders?
  • (Teacher Education)  Novice teachers to become effective at leading students to success?
  • (Youth Mentoring)  Struggling students to learn the discipline, content, and study skills needed to succeed?

After you have created that list, decide what data could be collected about each step in that process. What data will tell you the strength of each link in the chain? What data will tell you if that link (program element) is doing the three things described above?

Then you need a schedule that spells out WHEN those data should be collected.

Data for every indicator should be initially collected to give you a baseline against which to measure for tracking progress. Once you have a baseline, predict the points at which (specifically when) you should begin to see some of the earlier and later indicators change. Again, the experience of mentoring program leaders in other organizations is very helpful with such predictions.

Collect the data:

  1. before the change is expected
  2. when it is expected
  3. and after it is expected.

You “bracket” the expected time because you cannot assume that your prediction will be correct, and you want the data to tell you when the changes actually started to happen. Knowing how long changes take to happen is the basic skill you need to more effectively assess those changes, cause changes, advocate for your program, and show that your program IS effective.
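The bracketing idea can be sketched as a simple scheduling helper. This is a hypothetical illustration, not part of the article; the indicator name, prediction date, and 30-day margin are all assumptions you would replace with your own predictions.

```python
# Sketch: "bracket" a predicted change date by scheduling one data
# collection before, one at, and one after the point where you expect
# an indicator to change. Dates and indicator names are hypothetical.

from datetime import date, timedelta

def bracket_schedule(indicator, predicted_change, margin_days=30):
    """Return three collection dates that bracket the predicted change."""
    margin = timedelta(days=margin_days)
    return {
        "indicator": indicator,
        "before": predicted_change - margin,  # baseline check before the change is expected
        "at": predicted_change,               # check when the change is expected
        "after": predicted_change + margin,   # check after, in case the change came late
    }

schedule = bracket_schedule("mentor-protege meeting frequency", date(2010, 3, 1))
print(schedule["before"], schedule["at"], schedule["after"])
# -> 2010-01-30 2010-03-01 2010-03-31
```

If the data show the indicator had already moved at the “before” collection, your prediction was late; if it has still not moved at the “after” collection, your prediction was early, and the schedule should be extended.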


Surprise! The Ultimate Goal of Program Evaluation is NOT Program Improvement!

Keep in mind that you are not collecting data just to show that the desired early and later changes have occurred. If that is all that you do, you may not learn enough to be able to sustain the changes.

You also want to understand WHEN and (if possible) WHY the changes occurred, so that you know what CAUSED the changes. That places you in a much more proactive position. In other words, your goal should not just be to improve results. Your goal should be to learn HOW to conduct a mentoring program that causes improved results.