Plan a Program Pilot Before Implementation

“Did you ever hear the story about the man who wanted to fly, but was too low on funds to buy an airplane? So he built himself an airplane from a kit he bought on the Internet. When the construction was finished, he took the airplane to the edge of a cliff, got in, started the motor, and drove off the cliff. Only after that did he decide there might be more to flying than just having an airplane.”

Or maybe you heard the story of the organization that decided it wanted a mentoring program in the worst way. Since it did little investigation, had no expert guidance, conducted no program pilot, and just used its own common sense, it got its mentoring program, but it WAS “in the worst way.” No doubt, afterward, the organization decided there must be more to effective mentoring than just having mentors and calling it a mentoring program.

When we begin any enterprise, we must make some assumptions about the needs we will address, what will best meet those needs, and how best to structure the pieces to work together. It is those assumptions that can make or break our program’s success, so we must not just make assumptions; we must check whether they are a valid basis for our work in mentoring. To do that, the assumptions need to be viewed and treated just like scientific hypotheses. Let’s briefly revisit the scientific method to see how it works.

The Scientific Method

  1. Identify the problem (the needs to be addressed).
  2. Collect information about the problem. (Review the relevant literature and expert practice to see what others have learned about addressing the problem. That way, we don’t have to rely on trial and error to discover what works.)
  3. Form the hypotheses. (In programs, there is usually a string of related hypotheses. Write out the “if we do ___, then we expect that ___ will happen” statements, using the best practices you have selected, that will form the core sequence of the program’s work. Usually these “if-then” statements are a logical series of causes and effects. They are the assumptions we must make as we begin. One example in such a series might be, “If we use effective mentor roles as the basis for our mentor training, and if we assess what mentors already know and don’t yet know, and if we design the training to teach mentors only the parts they don’t yet know, then the training will be efficient, the mentors will find it to be of value, and their mentoring will be effective.”)
  4. Test the hypotheses. (Conduct a small, then gradually larger, set of experiments to test the series of assumptions you have made. That “test” is your program pilot.)
  5. Collect data. (Assess the degree to which your plans are implemented, the obstacles encountered, the solutions people develop, and the progress they make toward goals.)
  6. Analyze the data. (Look for meaningful patterns in the data that allow you to reach conclusions about what is happening, and so about the validity of your series of assumptions.)
  7. Reach conclusions about the hypotheses. (Decide whether the activities you assumed were needed actually happen as planned and produce the effects you expected. Then write recommendations about what needs to be improved to better attain your goals and meet the targeted needs.)

Conduct a Pilot
Before an organization implements a mentoring program, it is best practice to:

  • Expect that mistakes will be made early on and lessons will need to be learned;
  • Plan BIG, but start small;
  • Begin implementation with a pilot program to test your assumptions and allow problems to be fixed before they become big ones.

A pilot is typically planned to implement the essential program pieces first, then add enhancements later in several phases. For example, the first phase (the pilot) could include:

  • mentor and protégé recruitment
  • mentor and protégé selection
  • mentor and protégé guides
  • mentor training
  • protégé orientation
  • matching criteria and processes
  • program evaluation (in effect, a pilot of the evaluation process you will use to evaluate the program pilot)

If some areas of the pilot did not function as hoped, the evaluation data will guide you on what needs to be improved. In that case, a further pilot may be the best next step.

If the pilot is successful, you should implement the “proven” elements of the program and work to expand participation and impact.

After the first year or two of implementation, you could move into a second phase by piloting additional needed elements, such as career development activities or other training.

A very good example of just this situation is the “Career Beginnings Program” evaluation.

A well-planned pilot should include a process for continuous feedback so improvements can be made as soon as a problem is detected; there is no need to wait until the end of the year to find out. Feedback can be gathered in numerous forms, including:

  • Evaluation forms;
  • Surveys;
  • Interviews;
  • Focus groups;
  • Observations.

Visit the “Program Evaluation” section of this web site for help in using these methods.

Once the pilot has been completed and all feedback and data have been collected, present the data to senior leadership along with lessons learned and your recommendations for improving the program. Once changes are made and the program is fully implemented, it is wise to continue presenting results, best practices, lessons learned, and further recommendations for improvement to senior leadership. This is important for maintaining ongoing commitment and support for the program. You will gain more support because it will be evident to your organization’s decision makers that you are holding yourself accountable for expected results and making a contribution to the organization by meeting essential needs related to its mission.