2.2.5 Plan for your Evaluation

Planning your evaluation means determining how you will answer the evaluation question(s) your program has identified.

For formative evaluation, your evaluation plan will focus on the implementation of the program, answering process-related evaluation questions.  This type of plan does not address outcome- or goal-oriented evaluation.

For summative evaluation, your evaluation plan will focus on outcome- or impact-oriented evaluation, answering questions about your program’s outcomes and impact.  In summative evaluation, two approaches are commonly used to assess your program’s effectiveness.  The first approach is to gather data retrospectively (after the fact) on how participants have benefited from participating in your program.  The second approach is to gather data before and after (pre-post): participants are assessed both before and after the program to determine the extent to which they have benefited from accessing your program or service.  The pre-post approach is considered more valid than the retrospective approach, because the net change in participants’ knowledge, skills, or behaviour can more plausibly be attributed to the program.

There is also a more sophisticated approach, known as experimental or quasi-experimental evaluation. This approach likewise focuses on outcome- or impact-oriented evaluation. It is similar to the pre-post approach, but it adds a control or comparison group.  For example, counselling support may be given to one group of students, while a similar group does not receive it.  Data are gathered before and after from both groups, and the results are compared to see the difference attributable to the counselling support.  This approach is considered the most valid, but it requires greater evaluation expertise and resources and is therefore less common; it can be done in collaboration or partnership with academic researchers.  It also poses some challenges, especially in community settings, where withholding an essential service from participants for the purpose of conducting evaluation may raise ethical questions.

Once you determine the type of evaluation (i.e. formative or summative), you are ready to complete your evaluation plan.  An evaluation plan is a written document that outlines the following:

  • Goal of evaluation
  • Type of data to be collected
  • Data collection method(s) to be used (e.g. survey questionnaire, interview/focus group guide, etc.)
  • Timeline for data collection and analysis
  • Persons or staff members responsible for data collection and analysis
  • Stakeholders that need evaluation results or findings

Let’s use the case example below to provide concrete examples of what evaluation designs look like:

Case Example

Erica is a Mental Health Educator at Mapleleaf College and works with the Student Health and Wellness Centre on campus. Over the last three years, she has been organizing training sessions for counsellors, clinical staff, administrators, and student body leaders to recognize early signs of addiction among students accessing their services. Erica planned to provide up to 20 training sessions reaching a total of 50 individuals drawn from a representative group of counsellors, clinical staff, administrators, and student body leaders.  The planned training included a combination of lecture and experiential learning sessions that incorporated case scenarios.  Erica invited a diverse group of speakers and content experts to deliver the training.

In this case example, Erica worked with key stakeholders (e.g. program advisory committee, management team, funders) to identify four evaluation questions.  The extent to which:

  1. Training participants are reflective of the intended target group (i.e. counsellors, clinical staff, administrators, student body leaders)
  2. The training has met its reach and scope of service (i.e. # of individuals trained, # & type of training provided)
  3. Participants are satisfied with the training
  4. Participants are able to recognize early signs of addiction after attending the training

Note that each evaluation question will require Erica to plan and conduct a different type of evaluation. For instance, evaluation questions #1, #2, and #3 will require formative evaluation, while evaluation question #4 will require summative evaluation.

Since Erica has been offering this training for the last three years, it is reasonable to assume that she might have done some evaluation.

Therefore, to address evaluation questions #1, #2, and #3, Erica would conduct formative evaluation. This will help Erica understand whether the training has achieved its implementation objectives.  Formative evaluation focuses less on training outcomes and more on the training process and implementation.

To address evaluation question #4, Erica would conduct summative evaluation. She has two approaches to choose from to answer the question “The extent to which participants are able to recognize early signs of addiction after attending the training.” If Erica chooses the first (retrospective) approach, she would assess participants’ ability to recognize the early signs of addiction after they complete the training. If she chooses the second (pre-post) approach, she would assess participants’ ability to recognize the early signs of addiction both before and after they complete the training. By comparing participants’ ability before and after, she can attribute the net change in that ability to the training with greater confidence.

Finally, once Erica identifies the evaluation question(s) and the type of evaluation (i.e. formative or summative), she will be ready to develop the evaluation plan. Erica’s evaluation plan will articulate the following:

  1. Goal of evaluation
  2. Type of data to be collected
  3. Data collection method(s) to be used
  4. Timeline for data collection and analysis
  5. Persons or staff members responsible for data collection and analysis
  6. Stakeholders that need evaluation results or findings