Monitor and co-evaluate

An important step in youth engagement is monitoring and co-evaluating your programs and services with youth. This will help you gather information about what’s working and what’s not, use that information to grow, and share your findings with others. Engaging young people as co-evaluators strengthens the evaluation process and leads to important benefits for youth, organizations and systems.

Co-developing an evaluation framework

Evaluation involves systematically collecting and analyzing information to understand whether we are doing what we set out to do and how well (or not) we are doing. Evaluation findings can tell us whether to stay the course or shift gears to improve programs. Evaluation is a way of assessing if and how youth, adults, agencies and communities are changing as a direct result of your interventions. For a comprehensive resource on how to plan, do and use program evaluation, visit the Centre’s Program evaluation toolkit.

General steps for carrying out an evaluation and how they link to youth engagement

Evaluation steps can be fluid and often overlap. These steps often occur in cycles as new evaluation questions arise and new areas to evaluate become apparent.

Step 1: Bring together a team

Identify all members who will be involved in the evaluation, including youth, leadership and other stakeholders who will be affected by the findings. Determine which skills and perspectives will be required throughout the evaluation and ensure no voices are missing around the table. Once an evaluation team is assembled, get to know each other. Try using one of the ice breakers in our activity section.

    
Evaluation roles for young people:
There is always the potential to move to higher, more genuine levels of youth engagement in evaluation. As you move along the continuum to higher levels of youth engagement, young people’s roles reflect greater decision-making and leadership. It is important to distinguish between disengaging activities that involve youth as passive subjects of evaluation and engaging activities that involve youth in developing and carrying out the evaluation.

 

 

Youth as subjects is a disengaging evaluation role; youth as consultants, partners and directors represent increasing degrees of engaging evaluation roles.

Goals of youth involvement
  • Youth as subjects: gain knowledge about young people
  • Youth as consultants: create a youth-friendlier process
  • Youth as partners: develop skills of youth and include youth voice
  • Youth as directors: empower youth and create community change

Defining the questions
  • Youth as subjects: adults define questions
  • Youth as consultants: adults define questions
  • Youth as partners: adults often define questions, with or without youth input
  • Youth as directors: youth define questions, with or without adult input

Creating the instruments
  • Youth as subjects: adults find or create instruments
  • Youth as consultants: adults ask young people for feedback on their instruments
  • Youth as partners: adults and youth may jointly create instruments
  • Youth as directors: youth create instruments, with or without adult input

Collecting information
  • Youth as subjects: adults collect information
  • Youth as consultants: adults collect information
  • Youth as partners: youth may help adults collect information
  • Youth as directors: youth collect information; adults may assist

Analyzing information and developing the final report
  • Youth as subjects: adults analyze information and write the final report
  • Youth as consultants: adults analyze information and lead the writing of the final report
  • Youth as partners: adults take the lead in analysis and writing; youth may assist
  • Youth as directors: youth take the lead in analysis and writing; adults may assist

Disseminating findings
  • Youth as subjects: adults disseminate findings, mostly to professional audiences
  • Youth as consultants: adults disseminate findings, mostly to professional audiences, with or without youth input
  • Youth as partners: adults take the lead in dissemination; youth may assist
  • Youth as directors: youth take the lead in dissemination; adults may assist. Findings may mobilize other youth or create community change

Roles of young people
  • Youth as subjects: young people are subjects of study
  • Youth as consultants: young people play limited roles as consultants
  • Youth as partners: youth assist adults in roles such as information collection, writing and dissemination of findings
  • Youth as directors: youth initiate and take the lead in all stages of the process; adults may or may not assist

Roles of adults
  • Youth as subjects: adults take the lead in all stages of the project
  • Youth as consultants: adults play most of the key roles
  • Youth as partners: adults initiate and implement the process, but enlist youth to assist them
  • Youth as directors: adults may or may not play supportive roles, but youth make the decisions

Adapted from Checkoway & Richards-Schuster’s work on youth participation in community evaluation 1

 

Evaluation roles for adult allies:

In youth-adult partnerships focused on evaluation, adults have very specific roles that may differ from other types of relationships with youth. Effective roles for adult partners include:  

  • recruiting youth
  • supporting youth: building relationships, facilitating safer spaces and exposing youth to new opportunities. It’s helpful to have an ally who can meet the young person beforehand and support them throughout the evaluation initiative
  • providing background information about the organization and the goal(s) of the evaluation project
  • establishing and communicating useful parameters for the evaluation project and its scope (e.g. time commitment, roles and expectations)
  • getting buy-in: empowering youth to take an active role in evaluating the programs and services that are designed to serve them
  • providing evaluation training: many youth have never done evaluation, so it’s important to provide skill-building opportunities so that everyone on the team can participate meaningfully
  • networking with others throughout the evaluation and making connections to the bigger picture
  • supporting program improvements
  • preparing youth for the slow pace and potential challenges of social change

 

Step 2: Develop a shared purpose

Set aside time to collaboratively identify the key evaluation objectives of the group. As a group, identify what you want to learn about the program through the evaluation. It is important that throughout this process, the group checks assumptions to make sure that everyone has a common understanding of the terms being used and their implications. The shared purpose, values and principles are a touchstone for the entire evaluation. The group can revisit them at critical points throughout the process. You may want to capture the shared purpose by developing terms of reference for the group and co-developing a workplan.

Step 3: Identify the Theory of Change

Evaluation is stronger if it is based on the theory behind your program (i.e. why and how certain activities will bring about the desired change(s), based on evidence from research, practitioners and clients).

If A, then B; if B, then C

A Theory of Change can be simple or detailed depending on the nature of the program, stakeholders’ preferences and plans for its use. There are different ways of developing a Theory of Change, but the main step is to identify the underlying assumptions in the if-then chain. This approach works well in the program development stage. With an existing program, it can be easiest to begin by developing a logic model, which maps out the program’s main activities and desired outcomes (see below). We can then articulate the underlying assumptions in the connections between activities and outcomes (i.e. how do we know that this will lead to that?).
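
To make the if-then chain concrete, here is a minimal sketch (in Python, using entirely hypothetical program content and assumptions) of how a team might write down each link in the chain along with the assumption and evidence behind it:

```python
# A minimal, hypothetical sketch of an if-then chain for a Theory of Change.
# The program content, assumptions and evidence below are illustrative only.
theory_of_change = [
    {
        "if": "youth co-facilitate peer support groups (A)",
        "then": "participants feel safer sharing their experiences (B)",
        "assumption": "peers are seen as more relatable than adult facilitators",
        "evidence": "practitioner experience; youth advisory feedback",
    },
    {
        "if": "participants feel safer sharing their experiences (B)",
        "then": "participants seek help earlier when struggling (C)",
        "assumption": "feeling heard reduces stigma around asking for help",
        "evidence": "published research on peer support (to be confirmed)",
    },
]

# Reading the chain back makes the underlying assumptions easy to question and test.
for link in theory_of_change:
    print(f"IF {link['if']}\n  THEN {link['then']}")
    print(f"  because we assume: {link['assumption']}\n")
```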

It’s important to note that testing whether or not a program causes particular outcomes is difficult because there are a number of different factors that can play a role in producing different impacts. In order to confidently say that program X caused outcome Y, various controls would need to be put in place through an experimental or quasi-experimental design. However, these design options are rarely feasible or appropriate in a real-world context. Therefore, our theories of change are developed based on the best evidence that is available to us from the research.

Contribution analysis is a relatively new approach that focuses not on identifying causes for different outcomes, but on providing information about the unique contribution of a program and its elements to the particular outcomes it is trying to produce. In other words, while a youth engagement program may contribute to mental health outcomes to some extent, it’s unlikely to be the sole influence on how a young person is doing. A contribution analysis can help us understand the extent to which the program has helped produce a particular outcome and which other factors also play an important role.

Basic steps of contribution analysis:

  1. elaborating the program’s Theory of Change
  2. identifying other contributing factors
  3. identifying key threats (i.e. other explanations) to the change mechanisms
  4. testing competing explanations.
ACTIVITY: The Reality wheel and Storyboard activities are designed to help you articulate your program model and Theory of Change.

Step 4: Develop a logic model

Once you have set the stage for the evaluation by bringing your team together, it’s time to map out the goals of the program and the activities that take place to reach those goals. This involves creating a program logic model. This is a visual diagram that shows the relationships between the different components of a program and the outcomes you’re hoping to produce by delivering the program. The program logic model forms the foundation for designing the evaluation questions. It isn’t static though—it can and should evolve over time as your program shifts and changes to meet the needs of service users. For the steps involved in creating the logic model, please see the Centre’s Program evaluation toolkit.
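
As a rough illustration only, the sketch below shows the typical building blocks of a logic model (inputs, activities, outputs and outcomes) as simple structured data; the example content is invented, not a recommended model:

```python
# Hypothetical sketch of the building blocks of a program logic model.
# Component names follow a common inputs -> activities -> outputs -> outcomes layout;
# the example content is invented for illustration.
logic_model = {
    "inputs": ["staff time", "youth honoraria", "meeting space"],
    "activities": ["monthly youth advisory meetings", "co-design workshops"],
    "outputs": ["10 advisory meetings per year", "2 co-designed resources"],
    "short_term_outcomes": ["youth report increased confidence and skills"],
    "long_term_outcomes": ["programs better reflect youth needs and priorities"],
}

for component, items in logic_model.items():
    print(component.replace("_", " ").title())
    for item in items:
        print(f"  - {item}")
```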

Click on the link for a sample youth advisory logic model.

Several large studies have identified and measured outcomes of youth engagement. For a list of outcomes of youth engagement at the individual, program, organization, community and system level visit the section Why it matters.

Step 5: Conduct a literature review

Conducting a review of the research literature can be helpful to your evaluation in many ways, and it should be done as early as possible in the evaluation process. As mentioned above, the research literature can be an important source of information in the development of a Theory of change for the program (i.e. what does the research say about the results of a certain program or activities?). The literature can also provide a wealth of information on the ways other similar programs have been evaluated (i.e. What methods and measures were used?). This information on methodology is particularly relevant when we develop an evaluation framework which is discussed below.

A review of the literature is often a task in the evaluation process that young people can take on quite effectively, as they often have ready access to libraries at colleges or universities.

For more information on how to conduct a review of the literature please refer to the Centre’s online learning module: Conducting a literature review.

Step 6: Select appropriate methods

There are a number of types of evaluations, including formative, implementation, process, summative and outcome evaluation. There can be overlap in terms of how these types of evaluations are defined and understood. For the purposes of this toolkit, we will refer to two main types of evaluation: process and outcome. When evaluating your program or initiative, you’ll want to think through how to evaluate both the process (i.e. the way the program is being delivered) and the outcomes (i.e. what changes you are hoping to see as a result of your youth engagement activities).

Process evaluation asks questions that will get at how the program/initiative is working or being delivered. The focus is on describing the program/initiative, the context and how the different elements of the program/initiative are working together. Process evaluation most often occurs when a program/initiative is being newly implemented, but it’s still helpful to ask process-related questions over time since it can help you to adapt and strengthen service delivery in an ongoing way. Process evaluation questions can tell you how closely a program/initiative is being implemented according to plan (fidelity), and how the delivery of the program or operation of the initiative should be changed to improve access and participation for the target population. In other words, a process evaluation tells you what’s working well and where things might need to be changed or improved.

Outcome evaluation describes the effects or results of the program/initiative on the target population. Outcome evaluation tends to occur at a key point in the life of a program/initiative, when outcomes are most likely to be seen and measured. Outcome evaluation questions should be mapped onto the outcomes that you have identified in your logic model. As well, you can ask other outcome questions that are of interest across all stakeholder groups. Measuring outcomes can be complex and time consuming. For this reason, it’s important to focus on the key outcomes that stakeholders are most interested in when planning an evaluation. This will help you ensure that you reserve time and resources for what’s most important. When planning your evaluation questions, it’s a good idea to have a mix of both process and outcome questions. This will give you a holistic sense of your program/initiative’s strengths and areas for improvement.

Whether you’re looking at process or outcome evaluation, the approach you use is critical since this will frame how you develop questions, the tools you plan to use, how you do your analysis and how you report your findings. A developmental evaluation is ideal for innovation and learning in complex environments. It is well suited for developing programs where there is uncertainty about how best to achieve the program’s goals. Developmental evaluation involves collecting real-time information to inform ongoing decision-making and adaptation.

Generally, developmental evaluation is useful for five main purposes:

  • ongoing development: adapting a program to changing conditions
  • tailoring: adapting a program for a particular context
  • rapid response: adapting a program quickly in a crisis
  • preformative development: readying a promising program for the traditional formative and summative evaluation cycle
  • systems change: providing feedback on broad systems change

For more on developmental evaluation, see the Centre’s online learning module: Developmental evaluation.

Step 7: Develop an evaluation framework

An evaluation framework is a document that outlines the questions you would like to answer in your evaluation and how you will go about answering them. The questions can focus on program processes but also outcomes as identified in the program logic model. Information on indicators, methods, measures and data collection details (i.e. who and when) can be included in an evaluation framework.
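
One way to picture an evaluation framework is as a small table in which each row pairs a question with its indicator, method and data collection details. The sketch below uses invented questions and indicators purely for illustration:

```python
# Hypothetical sketch of rows in an evaluation framework.
# Each row pairs an evaluation question with its indicator, method and data
# collection details; the content is illustrative, not prescriptive.
evaluation_framework = [
    {
        "question": "Do youth feel their input shapes program decisions?",
        "indicator": "% of advisory members who report their ideas were acted on",
        "method": "year-end youth advisory questionnaire",
        "who_collects": "youth co-evaluators",
        "when": "annually, at the final advisory meeting",
    },
    {
        "question": "Is the advisory reaching a diverse group of young people?",
        "indicator": "demographic breakdown of advisory membership vs. service users",
        "method": "registration forms and service data",
        "who_collects": "program coordinator",
        "when": "each recruitment cycle",
    },
]

for row in evaluation_framework:
    print(f"Question:  {row['question']}")
    print(f"Indicator: {row['indicator']}")
    print(f"Method:    {row['method']} ({row['who_collects']}, {row['when']})\n")
```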

Define evaluation questions

Many child and youth mental health agencies strive to ensure youth are meaningfully engaged in the creation of the programs they will deliver, but sometimes forget to include them in the evaluation of these programs. Be sure to engage youth in co-defining questions that ask about areas that are working well and areas that could be improved. It’s important to review the shared purpose of the evaluation (as identified in Step 2) so that you don’t stray too far from the original intent of the evaluation. Evaluation questions (particularly the ones focused on outcomes) should align directly with what’s in your logic model.

When developing evaluation questions, consider:

  • What do you already know?
  • What do you need to know?
  • What do you want to know?
  • What will you do with what you know?

Select indicators and measures

An indicator is something that tells you that a particular change has occurred. For instance, if the evaluation question is “Do youth who access a walk-in clinic feel more hopeful after meeting with the counsellor?”, the indicator tells us how we will know this (e.g. the number of youth who indicate feeling more hopeful immediately after the session, as shown on the Walk-In Service questionnaire).
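
As a tiny illustration, the sketch below counts a hypothetical indicator from invented post-session responses (the field name more_hopeful is a stand-in for the relevant questionnaire item):

```python
# Hypothetical sketch: computing an indicator from post-session questionnaire items.
# The responses are invented; "more_hopeful" stands in for the relevant item.
responses = [
    {"youth_id": 1, "more_hopeful": True},
    {"youth_id": 2, "more_hopeful": False},
    {"youth_id": 3, "more_hopeful": True},
    {"youth_id": 4, "more_hopeful": True},
]

count_more_hopeful = sum(r["more_hopeful"] for r in responses)
print(f"{count_more_hopeful} of {len(responses)} youth reported feeling more hopeful "
      f"immediately after the session.")
```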

For a list of process indicators specific to youth engagement, visit this site.

Although measures are sometimes developed within an organization, there are many standardized measures that have been developed to tap into a range of child and youth mental health outcomes. The advantage of using a standardized measure is that it has been shown to be reliable and valid, allowing you to generalize your data to your broader client base. The Centre’s measures database is a growing online directory that profiles measures related to child and youth mental health. This online tool is free to access on the Centre’s website here.

For youth engagement in particular, we've compiled a list of measures.

Choose appropriate methods

Deciding on which method(s) you will use for your evaluation depends on what you are measuring, the purpose of the evaluation, available resources and other factors unique to the agency’s situation and culture (e.g. the type of reporting requirements that your funder has set out). Whether focused on process or outcome questions, the range of methods you might draw from can include interviews, focus groups, questionnaires, observation, surveys and analysis of case notes. Whenever possible, it is best to use standardized measurement tools as this will increase the reliability, validity and generalizability of the data. For more information on how to select measures and methods for your evaluation, watch our online learning module.

Step 8: Determine ethical considerations

It is important for the evaluation team to spend time becoming aware of any ethics-related policies or guidelines in the organization. In some agencies, evaluation activities can’t take place until an ethics review committee or board has a chance to review the plan and ensure that those who take part in the evaluation are protected from unintentional harms. Before administering any measure, it is important to inform participants (in writing) about the purpose of the evaluation and their role in it. That letter should be clear, free of jargon and include a description of participants’ right to confidentiality and anonymity should they decide to participate. The letter should also emphasize that refusing to take part in the evaluation (or choosing to end their involvement at any time) will not affect the services they receive in any way. Watch our e-learning module Let's Get Ethical to learn more about the importance of consent and ethics in evaluation and read more about issues related to consent and ethics in evaluation.

Step 9: Collect data

The evaluation framework will include high-level information about who will collect data, when and how. Below you will find specific examples of data collection methods to help you get started. For a more in-depth view, we’ve developed a learning module that introduces you to qualitative and quantitative approaches to data collection and analysis.

Qualitative data

Qualitative data is used for an in-depth description of the quality of participants’ experiences. It is also helpful for understanding quantitative data better, since it can explain why certain numbers look the way they do. Here are some examples to help you get started:

Open-ended questionnaires

Online or paper questionnaires that have open-ended questions are ideal for collecting in-depth, written feedback. The Head, Heart, Feet, Spirit sheet is a qualitative questionnaire that has been tested and works well with diverse youth.

Interviews and focus groups  

Interviews involve one participant talking with one interviewer about their experiences with a program. The advantage of using this method is that a program user can respond to specific questions based on their personal perspective in a way that is independent of the views of others.

Focus groups bring several participants together to answer questions about a program/initiative. Unlike interviews, participants can interact with one another and build on each other’s responses. Focus groups can be cost-effective, since a greater number of insights can be gathered from a group of people in a relatively short time period.

Arts-based techniques 

Arts-based techniques use creative activities to collect information about a program. They offer alternatives for youth who feel less comfortable with expressing themselves verbally and in public. Using art as a way to reflect can be very emotional and can provide insights into participants’ deepest feelings. These techniques can be used in many ways throughout the evaluation process. For example, photography can be used to prompt or elicit ideas, express how someone feels, document experiences and/or communicate results. Using theatre activities to explore evaluation questions can also be an effective method in a theatre-based program because it is based on young people’s interests and skills.

  • Drawing and collage:  Drawing and collage involves making visual representations of program experiences.  
  • Photovoice: A qualitative method that involves participants in data collection by having them photograph their everyday realities, participating in group discussions about the photographs and communicating the results to people who can help make change.
  • Photo elicitation: Photographs can be used as prompts when you interview others, as they can be used as starting points for discussion or as tools to evoke memories of the program.  
  • Theatre: Theatre performance as an evaluation tool involves acting out scenarios and exploring concepts, experiences, feelings, roles and characteristics of a program/initiative. Participants use their bodies and emotions to create something new, and in doing so they often gain a new understanding of their context. Theatre performance is particularly useful to exaggerate important details for analysis.  

 

Qualitative data debrief: After each round of data collection, gather the evaluation team and go through the data in small teams, then share back to the larger group. Here are some discussion questions 2 to help kick start the discussion and analysis:

  • How did this method of collecting data go? 
  • Were there any challenges?
  • What stands out to you?
  • What did you learn?
  • Any surprises or patterns?

 

Quantitative data

Quantitative data is all of the numerical information you have collected about the program. It is useful for providing a high-level picture of how a program is functioning: what’s working well, where you can improve, etc. Quantitative data is most valuable when combined with qualitative data, which offers more depth and nuance to complement this information. Here are some examples to get you started:

 

Surveys/questionnaires

Surveys and questionnaires are a common and efficient way to collect data. Participants respond to a series of questions about their experiences with a program. This information is rolled up and reported across participants. Program users can complete surveys/questionnaires in a number of ways: online, on paper, with support from a member of the evaluation team, etc.

TIP: Try using the human graphing technique as a more interactive way to retrieve survey results.

Some online software (e.g. Poll Everywhere, Turning Point) can provide immediate results for those analyzing data, and can be accessed using mobile phones. Other online software makes it possible to not only collect and gather data into a database, but also to conduct some simple analysis (e.g. Survey Monkey, MailChimp and Fluid Survey) to share back immediately with participants and evaluators.

Step 10: Analyze and interpret data

Qualitative Analysis

Qualitative analysis is the process of working with qualitative data to gain an in-depth understanding of the experiences of users. The evaluation team can read through all of the responses and look for patterns that surface across several data sources. They can move the data around and group them, use colored dots or symbols in the margins, use highlighter markers or other visual ways to find the patterns. They can name each of these patterns (codes). Here are some discussion questions 3 to help kick start the analysis:

  • What generally is being said? Is it positive or negative?
  • What surprised you the most? Why?
  • What responses didn’t surprise you? Why?
  • What patterns or themes emerge: What words, issues, or responses keep coming up?
  • How do these patterns relate to the overall evaluation question?
  • Are there any deviations from these patterns?
  • Were there any differences in the data that could be attributed to a specific group based on demographic data?
  • Are there any interesting quotes or stories that illustrate a particular pattern? Do they speak to the overall evaluation question?
  • Do these patterns match other findings?

The discussion across members of the evaluation team is critical in comparing and contrasting different viewpoints, looking for similarities and differences and arriving at a sense of shared meaning that is used to explain the data. 

A common qualitative analysis software program is NVivo (QSR International). This program is complex, costly and may be better suited to very large data sets for research purposes. Additional programs, many of which are free, may also be worth considering.

Whether you use one of these software options or simply use a spreadsheet program (like Microsoft Excel) to help organize your data, it’s important to remember that these tools don’t do the analysis for you. What they can do is help to arrange your data according to different key words or themes. Once data is organized in this way, the evaluation team needs to work together to make this information meaningful. For more information about how to analyze qualitative data, have a look at our learning module on this topic.
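
For example, once the team has agreed on codes, “arranging data by theme” can be as simple as grouping excerpts under each code and counting them. The sketch below uses invented codes and excerpts; the interpretation still belongs to the team:

```python
from collections import defaultdict

# Hypothetical sketch: grouping coded excerpts by theme after the team has
# read the responses and agreed on codes. Codes and excerpts are invented.
coded_excerpts = [
    ("feeling heard", "The facilitator actually listened to what I said."),
    ("scheduling barriers", "Meetings were during school hours, so I missed a few."),
    ("feeling heard", "They changed the agenda after we pushed back."),
    ("peer connection", "I met other youth who get what I'm going through."),
]

by_theme = defaultdict(list)
for code, excerpt in coded_excerpts:
    by_theme[code].append(excerpt)

# The counts and groupings are only a starting point; the team still has to
# discuss what the patterns mean for the program.
for code, excerpts in sorted(by_theme.items(), key=lambda kv: -len(kv[1])):
    print(f"{code} ({len(excerpts)} excerpts)")
    for e in excerpts:
        print(f"  - {e}")
```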

Quantitative Analysis

Quantitative analysis can be quite complicated. However, simple tallies and comparisons can provide useful information and are relatively straightforward. Although not for everyone, some young people really enjoy inputting data and doing quantitative analysis once they know the basics of what’s involved. 

Steps for simple quantitative analysis (an illustrative code sketch follows the list):

  • Number all surveys (if not already numbered with user ID codes) so that you will notice if one is missing and to help you return to a particular survey later if needed.
  • Make a blank copy (or several if breaking into smaller groups) of the survey or add survey questions into a spreadsheet (if not already automatically entered as with online surveys).
  • Ideally, input all data into the spreadsheet for each respondent.
  • If possible, split the data so that smaller teams can work through different questions.
  • Question by question, tally up the number of times people selected each response in the spreadsheet. This is an analysis of frequency (i.e. how often participants select a particular response).
  • In some situations, where questions are rated on a numbered scale, it may also be useful to calculate the average of all of the responses (this is called the mean).
  • In order to compare the answers of different subgroups, sort data based on demographic/background data (e.g. age, length of time in program, gender, ethnicity, etc.). For example, if you want to see if gender has an influence on responses, include separate columns (e.g. men, women, transgender). Go through the questions and again tally up the responses for each gender category.  When you have finished, double-check the figures to see if they add up to the total. This is called a cross-tab.
  • Build summary statements about the data. Include the total number of responses for each piece of information. The sample size is identified as n and is often presented in parentheses. If there is a large number of respondents (>100), or if it is useful for comparisons, also include percentages. For example: Almost all of the young people (95%, n=110) responded that the program gave them more confidence to talk about mental health.
  • Graphs and/or tables are a helpful way of summarizing information, so encourage the team to consider whether this might be useful for reporting data. 
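
As a rough sketch of the tallies described above, the example below uses invented survey data to compute frequencies, a mean for a scale question, a cross-tab by a demographic variable and a summary statement with n and a percentage (the pandas library is assumed to be available; the column names are hypothetical):

```python
import pandas as pd

# Hypothetical sketch of the tallies described above, using invented survey data.
# Column names and response options are illustrative only.
surveys = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "gender": ["woman", "man", "woman", "transgender", "man", "woman"],
    "more_confident": ["yes", "yes", "no", "yes", "yes", "no"],
    "helpfulness_rating": [5, 4, 3, 5, 4, 2],  # rated on a 1-5 scale
})

# Frequency: how often each response was selected.
print(surveys["more_confident"].value_counts())

# Mean of a numbered scale question.
print("Average helpfulness rating:", round(surveys["helpfulness_rating"].mean(), 2))

# Cross-tab: responses broken down by a demographic variable.
print(pd.crosstab(surveys["gender"], surveys["more_confident"]))

# Summary statement with n and percentage.
n = len(surveys)
pct = 100 * (surveys["more_confident"] == "yes").mean()
print(f"{pct:.0f}% of respondents (n={n}) said the program made them more confident.")
```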

 

TIP:  Encourage the group to consider the findings and decide if there are other questions they would like to ask of the data. For example, they may wonder whether the length of time involved in the program was related to better outcomes. These questions may only become clear after the first round of analysis.

 

Co-evaluators and evaluation participants have valuable contributions to make when interpreting the data and its relevance to the program. Present the findings to them for further analysis. For more in-depth quantitative analysis, consider partnering with experts who have access to analytical software or particular training to work with you. This may include student researchers, university partners or organizations with quantitative or evaluation expertise.

Step 11: Use and communicate results

The ultimate goal of evaluation is to identify what’s working so that those activities can continue, and to identify things that need to change so that they can be adapted (or stopped). In order to do this, you’ll need to share what you have found in a way that helps people make change. At the start of the evaluation project, encourage the team to consider different ways to share evaluation findings to ensure that they’ll be used. Over the course of the evaluation project, these plans can be refined or shifted. Consider the following questions when developing an action plan with your key program stakeholders to make use of the evaluation findings and share results:

  • Who are the main audiences for our messages?
  • What are our goals in sharing evaluation findings?
  • Who are the stakeholders that will be interested in and/or influenced by the findings?
  • What are the key messages contained in the evaluation findings?
  • Who needs to know what?
  • How can each audience best be reached?
  • How can we best present this information to enhance understanding and use?
  • How will we share sensitive or negative results?

Please visit the Centre’s knowledge mobilization toolkit for a step-by-step worksheet to help guide your efforts to share knowledge and make valuable practice change. You can also watch our learning module to help you figure out what to do and how to use the data you have collected.  

  • 1. Checkoway & Richards-Schuster, 2003, p. 25
  • 2. Sabo Flores, 2008
  • 3. Sabo Flores, 2008