
Evaluation Guide of Guides

This annotated compendium of evaluation planning guides can help you understand the basics of conducting an evaluation; learn how to create a logic model and indicators; understand evaluation terminology; develop performance management metrics; and evaluate your research, knowledge translation and commercialization activities, outputs and outcomes.



KT - KNOWLEDGE TRANSLATION - GUIDES

Evaluation Guide of Guides
By: Anneliese Poetz, David Phipps
Photo by Anastasia Petrova on Unsplash

Introduction

As a researcher, or researcher-in-training, you may be aware of the shift in the expectations of research funders toward a desire for return on investment in the form of societal benefit(s). For some researchers this may be a foreign concept, and one that can create a certain degree of trepidation. Evaluation is closely linked to planning. The purpose of discovery-based research is discovery (e.g. a cure for a disease), which may or may not lead to societal benefit, making a promise or plan for 'impact' difficult. Knowledge translation, including stakeholder engagement and evaluation (understanding logic models and defining indicators), typically does not form part of the curriculum in graduate school or post-doctoral studies, further complicating your ability to plan for, articulate and demonstrate impact. So, how do you know that what you're proposing and/or doing has potential for creating societal impact? What is impact? How do you demonstrate to a funder or potential collaborator/partner that your project is worthy of consideration? What is a logic model, and how does it relate to planning for and evaluating the impact(s) of your research project? How do you report on your progress using quantitative and qualitative indicators?

This Evaluation Guide of Guides¹ will help you answer these questions. Specific concepts you will find valuable for (research and KT) project planning and evaluation include: understanding logic models, indicator development, and the role of stakeholder engagement. Understanding these concepts can help you with your planning (grant applications) and evaluation (reporting) for your research projects. This Evaluation Guide of Guides can be used in conjunction with the KT Planning Guide of Guides and the Stakeholder Engagement Guide of Guides.
The framework for research, knowledge translation (KT) and commercialization for Kids Brain Health Network (KBHN) is called the Co-Produced Pathway to Impact (CPPI). It comprises five stages, from left to right: research, dissemination, uptake, implementation and impact, and emphasizes stakeholder engagement and co-production throughout (see Figure 1). The framework can be operationalized for an individual research and/or KT project, for a collection of projects such as those within the Autism Spectrum Disorder program within KBHN, or for an entire organization's portfolio, such as the research, knowledge translation and training programs that comprise KBHN's focus as a Network of Centres of Excellence (NCE).

¹ The KT Core within Kids Brain Health Network provides a suite of services including evaluation. Over the past five years, the most frequent service requests for evaluation have been for evaluating an event (such as a conference or stakeholder meeting) or developing indicators for research and KT planning and reporting. This guide focuses mainly on the latter, for two reasons: 1) a needs assessment survey recently conducted among trainees indicated they want to know more about planning and evaluation for applying for grants, and 2) it is currently the most salient topic concerning evaluation within the Network. However, the same evaluation concepts can be applied to event planning and evaluation (e.g. identify target audiences, do a needs assessment, plan the project and measure success). The authors thank Jeannie Mackintosh, Amber Vance and Stacie Ross for their assistance in the development of this guide.
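The five ordered CPPI stages lend themselves to a simple enumeration. A minimal sketch in Python (the class and member names are illustrative, not from the guide; note that stakeholder engagement is not a stage but spans all five):

```python
from enum import Enum

class CPPIStage(Enum):
    """The five stages of the Co-Produced Pathway to Impact, left to right.
    Stakeholder engagement and co-production run throughout all stages."""
    RESEARCH = 1
    DISSEMINATION = 2
    UPTAKE = 3
    IMPLEMENTATION = 4
    IMPACT = 5

# Walk the pathway in order, e.g. when tagging project updates by stage:
pathway = sorted(CPPIStage, key=lambda s: s.value)
print([s.name for s in pathway])
# → ['RESEARCH', 'DISSEMINATION', 'UPTAKE', 'IMPLEMENTATION', 'IMPACT']
```

An ordered enumeration like this could, for example, let a reporting tool check that an impact claim is accompanied by updates from the earlier stages that should precede it.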
This guide was developed by the Kids Brain Health Network (formerly NeuroDevNet) KT Core and York University. Last updated July 2018.

Figure 1: The Co-Produced Pathway to Impact (CPPI) Framework. [Diagram: the five stages (research, dissemination, uptake, implementation, impact) flow left to right from stakeholder need(s) to benefits at each stage, with stakeholder engagement spanning the whole pathway.]

The CPPI contains an invisible or 'underlying' logic model, which makes it useful for both planning and evaluation. In program evaluation, a 'logic model' (also referred to as a 'theory of change' or 'results chain') comprises the same stages: inputs, activities, outputs and outcomes (see Figure 2 below). The creation of meaningful indicators with readily available data sources is encouraged for each stage. By defining indicators for each stage of the project/program, the project's progression can be monitored and reported on in detail. Indicators can serve as reminders of which data to collect as the project progresses (formative evaluation²) to facilitate ongoing reporting, and can also identify the benefit(s) the project has achieved for society (summative evaluation³). To this end, evaluation planning for research, KT and evaluation can facilitate pivots in the project's focus toward maximizing the potential for impact, by responding to the results of ongoing monitoring and evaluation. Said differently, knowing what is working, and what isn't, can help you know where to focus your attention and resources to ensure your team's efforts lead to the greatest benefit(s) for society.

The definitions of, and relationships between, the elements of a logic model are:

Inputs: facilitate activities to be accomplished (e.g. financial and/or human resources, knowledge, etc.)

Activities: enable outputs to be created (e.g.
writing of a clear language summary, creation of an infographic, design and testing of prototypes of commercial products, etc.)

Outputs: the 'thing' created, either a commercial product or a knowledge product (such as a summary or infographic), which, if used, facilitates an outcome (change)

Outcomes: an 'impact' or change that occurred (e.g. in individual behaviour, individual health status or quality of life, the way people approach their work, or to existing policies or the creation of new ones)

² Formative evaluation definition: "An evaluation conducted for the purpose of finding areas for improving an existing initiative" (source: CIHR's "A Guide to Evaluation in Health Care")

³ Summative evaluation definition: "An evaluation that focuses on making a judgment about the merit or worth of an initiative. This form of evaluation is conducted primarily for reporting or decision-making purposes." (source: CIHR's "A Guide to Evaluation in Health Care")
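The input→activity→output→outcome chain described above can be sketched as a simple data structure. The following Python sketch is illustrative only (the class and field names are our own, not from the guide); the example values are drawn from the definitions above:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Sketch of the logic-model chain: inputs enable activities,
    activities produce outputs, and outputs, when used, lead to
    outcomes (changes)."""
    inputs: list[str] = field(default_factory=list)      # e.g. funding, expertise
    activities: list[str] = field(default_factory=list)  # e.g. writing a summary
    outputs: list[str] = field(default_factory=list)     # e.g. infographic, prototype
    outcomes: list[str] = field(default_factory=list)    # e.g. change in practice/policy

model = LogicModel(
    inputs=["grant funding", "existing research findings"],
    activities=["create an infographic of key findings"],
    outputs=["infographic"],
    outcomes=["increased awareness among practitioners"],
)
```

Holding the four stages side by side in one record makes it easy to spot gaps in a plan, for instance an activity with no intended output, or an output with no anticipated outcome.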
Figure 2: The basic parts of a logic model and their relationships to each other (inputs enable activities, activities lead to outputs, outputs lead to short-, medium- and long-term outcomes)

In the CPPI, as in all the program evaluation guides reviewed, the first stage is (ideally) identifying the stakeholders who will use the findings and engaging them early and throughout, to ensure the work is (and remains) relevant for addressing their needs. Importantly, all program evaluation guides necessitate stakeholder engagement early on, for needs assessment, to ensure the relevance and usefulness of the intended outputs and outcomes of the project/program. While the program evaluation guides are specific to evaluation planning, the concepts are transferable to research and KT planning, in that program evaluation begins with:

1. identifying target audiences/end users/stakeholders
2. needs assessment
3. planning the project (inputs, activities, outputs, anticipated outcomes)

with the goal of creating useful information, solutions and recommendations for the end-user(s).

The guides for evaluation in this document consider 'impact' to be evaluated at the end of the project, as a 'summative' evaluation. While the CPPI as illustrated places 'impact' at the right side of the diagram (see Figure 1), which may appear to suggest that impact(s) can only occur after a project's completion, impact(s) can (but don't necessarily) occur throughout a project's progression. This is why we distinguish between short-, medium- and long-term impact(s).
An example of a short-term impact that occurs before the project is completed could be an increase in quality of life experienced by participants in a study after receiving an intervention or using a prototype of a commercial product. Therefore, both a 'formative' (ongoing) and a 'summative' (at the end) evaluation approach is recommended for research and associated KT project(s).
Figure 3 provides more detail about the relationships and characteristics of a logic model for a program. These characteristics can transfer to any research project, program of research, KT project, KT program, or research project/program with embedded KT goals and activities.

Inputs include: identified stakeholder need(s) and other information from stakeholder engagement, funding, existing research (for synthesis), research questions, expertise, personnel, and other resources that enable stakeholder engagement, dissemination and/or the research activities.

Activities, including stakeholder engagement, can occur at any stage of the project. Typically, academic researchers focus their efforts on the beginning stages (research, dissemination, stakeholder engagement, including co-production with a partner), with more of an advisory role in the uptake and implementation phases. This shift in control over the project is illustrated in Figure 3. Whereas academic research projects comprise the 'supply' side (supplier of new knowledge and/or evidence-based products/interventions) and are in full control of the project until knowledge/products are produced, once these outputs are used (implemented) the researcher's role and control is reduced. For example, an intervention may be implemented into practice and, while it is the implementing organization that controls who they hire, the processes and availability of service delivery, etc., the organization may consult with the researcher as an expert who can help ensure that the intervention is being delivered correctly.
Outputs can be generated during the dissemination stage (such as videos, infographics, clear language summaries of research findings, or a training manual) and during the uptake and implementation stages (such as a workshop to train frontline workers on how to use the training manual, or custom worksheets created for delivering a workshop in a specific context).

Finally, whether outcomes occur in the short term (less than 12 months), medium term (between 1 and 5 years) or long term (5+ years), these changes are all considered 'impacts'. While the 'outcomes' arrow points to the 'impacts' box at the right side of the diagram, impact(s) can occur at any stage of the process and can be discovered through ongoing stakeholder engagement.

Figure 3: Defining and describing each section of the logic model, and the relationships among them. Inputs enable activities, activities lead to outputs, short-term outcomes include use of the outputs, and long-term outcomes refer to changes that result from the use of the outputs. The figure describes each stage as follows:

• Inputs: financial, human and other resources mobilized to support activities (budgets, staffing and other available resources); begin by stating your goal(s), which should be based on stakeholder-identified need(s)
• Activities: actions taken or work performed to convert inputs into specific outputs (a series of activities undertaken to produce goods and services)
• Outputs: products resulting from converting inputs into tangible outputs (goods and services produced and delivered, under the control of the researcher)
• Outcomes: use of outputs by the targeted population (not fully under the control of the researcher)
• Final outcomes: the final objective of the project and long-term goals (changes in outcomes with multiple drivers)

Inputs, activities and outputs sit on the implementation (supply) side; outcomes and final outcomes sit on the results side (supply side + demand side).
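The short-/medium-/long-term cutoffs above (less than 12 months, 1 to 5 years, 5+ years) amount to a simple classification rule. A minimal sketch, assuming the boundaries stated in the text (the function name is our own):

```python
def outcome_term(months_after_start: int) -> str:
    """Classify an outcome by the time horizons used in this guide:
    short-term (< 12 months), medium-term (1-5 years), long-term (5+ years)."""
    if months_after_start < 12:
        return "short-term"
    elif months_after_start < 60:   # 5 years = 60 months
        return "medium-term"
    return "long-term"

print(outcome_term(6))    # → short-term
print(outcome_term(24))   # → medium-term
print(outcome_term(72))   # → long-term
```

A rule like this could be used to bucket reported outcomes automatically when summarizing a portfolio of projects, while remembering that, as the text notes, all three buckets count as 'impacts'.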
In a recent request-for-applications process, KBHN asked researchers to chart their proposed research and KT projects onto a table that looked very much like a logic model: goal/objective, activities, outputs and outcomes (short-, medium- and long-term). This table has since been adapted for each project: 1) details within each stage were adapted to be 'indicators' (# of…, narrative about…, etc.); 2) a column was added to the right side of the table for researchers to use for reporting on the indicators at each stage. Each row in the table identified the objective, activity, output, (anticipated) outcome(s) and the time frame (by quarter) within which the activities were to be completed. Identifying the time frame for completion adds an element of project management to the monitoring and reporting process. Asking researchers to plan (and report on) their projects using a logic model has enabled KBHN to i) operationalize the CPPI for managing its program of funded research projects, ii) facilitate monitoring (project management) by adding to the planning and reporting template a time frame for completion, and iii) obtain detailed ongoing reporting on indicators identified for each stage (activity, output, outcome). Figure 4 shows KBHN's reporting template, which we (the KT Core) created as a modification of the original table used during the funding application process.

Figure 4: Reporting template for KBHN research projects, based on a logic model. Coloured sections map the stages of the CPPI framework onto the corresponding logic model components below: co-production (stakeholder engagement, including partner engagement), research, dissemination, uptake (are people considering how to use your outputs?), implementation (how are people actually using your outputs?) and impact(s) (what has or will change/be different, in the short, medium and long term?). The co-production stage is positioned at the left, but is relevant to all stages of the project (as depicted in the framework in Figure 1). For each row, the template records the Activit(ies), Output(s), Outcome(s)/Impact(s) and Time frame, plus a column for detailed updates on activities, outputs and outcomes (to be completed for KBHN HQ every 6 months). Instructions to project teams: type updates for each activity in point form; be as detailed as possible (what, where, who, including names of people and/or organizations, why, when, how, etc.); include data, if available, for indicators; provide narrative descriptions, if possible, for qualitative indicators; and expand the size of the box if you run out of room. Please ensure data sources are readily accessible for all indicators in the table, to facilitate reporting on them.

The Activity, Output(s), Outcome(s)/Impact(s), Time frame and related content originates from the grant application template in which researchers previously articulated their plan for their project(s). Project teams use this table to report on the activities they proposed in their original application, including commenting on any shift or adaptation in their activities and/or outputs in response to stakeholder input and/or the normal progression of the project. This table represents the interconnection between planning and evaluation of research/KT/commercialization projects.
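One row of a reporting table of the kind just described can be modelled as a small record: planned content from the application on one side, and a free-text update column filled in on the reporting cycle on the other. This Python sketch is illustrative (the class and field names are our own; the example values are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ReportingRow:
    """One row of a logic-model-style reporting table: the planned
    activity, its outputs, anticipated outcomes/impacts expressed as
    indicators, a completion time frame (by quarter), and a detailed
    update column completed every reporting cycle (6 months)."""
    activity: str
    outputs: list[str] = field(default_factory=list)
    outcomes_impacts: list[str] = field(default_factory=list)  # '# of…', 'narrative about…'
    time_frame: str = ""        # e.g. "Q2"
    detailed_update: str = ""   # point-form update: what, where, who, why, when, how

row = ReportingRow(
    activity="Deliver training workshop for frontline workers",
    outputs=["workshop slides", "custom worksheets"],
    outcomes_impacts=["# of workers trained", "narrative about changes in practice"],
    time_frame="Q2",
)
# At reporting time, the team fills in the update column:
row.detailed_update = "Workshop delivered to 25 staff at partner organization."
```

Keeping the planned indicators and the reported updates in the same record mirrors the point the text makes: the reporting template is the planning template plus one column, which is what ties planning and evaluation together.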
Engaging stakeholders

A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions
Organization: Robert Wood Johnson Foundation
Author(s): Hallie Preskill, PhD, and Nathalie Jones
Pages: 48 pages

What is this about?
• This guide begins by explaining the value of stakeholder engagement in developing evaluation questions, the role of evaluation, the process, and stakeholders as intended users of evaluation findings
• A step-by-step guide for involving stakeholders in the development of evaluation (and research) questions. The steps are: 1) prepare for stakeholder engagement, 2) identify potential stakeholders, 3) prioritize the list of stakeholders, 4) consider potential stakeholders' motivations for participating, 5) select a stakeholder engagement strategy
• Good evaluation questions generate useful, relevant and credible evaluation findings, and the only way to develop good questions is to engage the stakeholders: the intended users of the findings
• Challenges to engaging stakeholders
• Contains worksheets, a case example, and a list of evaluation resources

How can you use it?
• Read the section on the value of stakeholder engagement to understand the value of thinking about stakeholders as end-users, and of crafting your evaluation (for your research and KT project) in response to their needs and intended uses of the findings
• Use the step-by-step guide for involving stakeholders, in conjunction with KBHN's Stakeholder Engagement Guide of Guides, to plan the stakeholder engagement strategy for your research, evaluation and knowledge translation activities
• Use the worksheets in KBHN's Stakeholder Engagement Guide of Guides in conjunction with the ones at the end of this guide to plan your stakeholder engagement strategy
• Use Figure 5 on p.16 to help you think through which types of stakeholders you imagine would use the findings of your research/evaluation/KT products, and the tip sheet on p.17 to ensure strategic thinking, diversity, etc. among the stakeholders you choose to involve
• Use the tips on p.18 to prioritize stakeholders, and the tips on p.20 to guide your thinking about your stakeholders' motivations for becoming involved in your project
• Consider the criteria on p.20-28 to help you think through which stakeholder engagement methods you can realistically use, based on factors such as: the amount of time you have, the budget available, the geographic locations of stakeholders, the range of stakeholder perspectives needed, stakeholder availability, the number of stakeholders you need to engage, and the degree of complexity of your project. Use the tips on p.29-30
• Read about challenges to engaging stakeholders, and how to overcome them, on p.30-32
Indigenous Approaches to Program Evaluation
Organization: National Collaborating Centre for Aboriginal Health
Author: not listed
Pages: 8 pages (to download the .pdf, click the "down" arrow on the top left side of the issuu viewing window)

What is this about?
• This short evidence-informed guide explains what evaluation is, why to evaluate, and who is involved in a program evaluation. It also discusses types of evaluation activities, such as:
  • needs assessment;
  • assessment of the 'program theory' (logic model), which refers to the design of the program (or research/KT program/project), including goal-setting for facilitating impacts for specific populations;
  • assessment of the management processes of the program/project;
  • assessment of impact(s), by measuring outcomes in terms of benefits to the targeted populations, not just use of the materials or services (which would be dissemination, uptake and implementation);
  • assessment of efficiency, relating costs to benefits
• Explains what an evaluation framework and a program logic model are
• Describes an Aboriginal research/evaluation framework which emphasizes the involvement of Indigenous communities in the design and development of the research and evaluation frameworks and processes, to ensure cultural sensitivity/safety and relevance to the communities
• Emphasizes the importance of involving First Nations, Inuit and Métis people in co-producing research (including KT and evaluation) rather than just participating in it

How can you use it?
• Learn about the history of the engagement of Indigenous communities in research, and the importance of early and meaningful stakeholder engagement in research and evaluation
• Learn the 4 R's of developing academic research and procedures in an Aboriginal context: Respect, Relevance, Reciprocity, Responsibility
• Trust and relationship-building are key for any successful research, KT and/or evaluation project. Consider adopting the stakeholder-centred approaches outlined in this guide as best practices for stakeholder engagement for all types of research, KT and evaluation projects/processes, beyond those which involve Indigenous communities
Evaluation Planning

Program Evaluation Toolkit
Organization: Ontario Centre of Excellence for Child and Youth Mental Health
Author(s): not listed
Pages: 1-27

What is this about?
• This short guide provides a high-level overview of what evaluation is
• For planning an evaluation, this guide includes steps for conducting an evaluation; ensuring stakeholder involvement throughout the process; developing a logic model; identifying evaluation questions; conducting a literature review; identifying indicators and outcome (impact) measures; considerations for ensuring validity and reliability of data; and practical considerations for data collection, budgeting, etc.
• For doing the evaluation, this guide includes information on evaluation frameworks, procedures, and quantitative and qualitative analysis
• For using the results of the evaluation, this guide includes a chart that identifies suggested target audiences for your evaluation, such as funders, a board of directors, staff, the public, families and children, partners, government, etc.

How can you use it?
• Learn about the important aspects of evaluation, such as how to plan and do an evaluation, and how to facilitate the use of the results
• Learn more by clicking on the links throughout this guide to other toolkits and examples
• Save time by using the links provided within this guide to find more information on literature review databases, data collection and analysis methods, self-assessment tools, and statistical software
A Guide to Evaluation in Health Research (the .pdf can be downloaded from this link)
Organization: Canadian Institutes of Health Research
Author: Sarah Bowen, Associate Professor, Department of Public Health Sciences, School of Public Health, University of Alberta
Pages: 55 pages (in the .pdf version)

What is this about?
• This evidence-informed guide defines what evaluation is, explains why it is important for researchers to understand how to conduct an evaluation, covers the similarities and differences between research and evaluation and common misconceptions, and outlines different evaluation approaches
• Explains the broad purposes for doing evaluation: to make a judgment, improve a program, support design and development, or create new knowledge
• It is similar to planning for research and/or KT in that it outlines how to: identify intended users, create a process for collaboration, build the team, gather data, co-construct the evaluation, and describe the program (research program/project along with KT activities) using a logic model

How can you use it?
• Learn why evaluation is important, and the types of evaluation
• Learn how to design an evaluation, from purpose/goal to developing the questions, identifying data sources, choosing methods for data collection, defining indicators that make sense for your project, budgeting, carrying out activities, and communicating (disseminating) findings
• Learn about special issues in evaluation such as: ethics and Research Ethics Board review, evaluating in complex environments, the complexity of 'causation', (health) economic evaluation, and the concept of the 'personal' factor
• Use the glossary to understand evaluation-specific terminology
• Use the evaluation checklist for developing and assessing an evaluation plan, to ensure you have considered all the important aspects of a complete and robust evaluation
• Use the evaluation planning matrix (Appendix A) and the example evaluation planning matrix (Appendix B) to sketch out a complete plan for your evaluation
• Learn more by consulting the additional resources listed on p.42 of this guide
Evaluation Practice Handbook
Organization: World Health Organization
Authors: Maria Santamaria, Marie Bombin, Guitelle Baghdadi-Sabeti, et al.
Pages: 161 pages

What is this about?
• This guide begins with the definition and principles of evaluation, evaluation culture and organizational learning, and using a participatory approach
• Most of this guide is about preparing and conducting an evaluation: developing evaluation questions, preparing terms of reference, choosing the methodology for data collection and analysis, estimating costs, project management and managing the team, managing conflicts of interest, identifying the information needs of stakeholders, ensuring quality, and writing the final evaluation report
• The guide ends with a section about dissemination, or communication and utilization of the evaluation results

How can you use it?
• Skip pages 8-13, as these are specific to evaluation at the World Health Organization
• Use the figure and example on p.99-106 to understand and address the limitations of a participatory approach to evaluation (and research/KT); the first step is to ensure the work focuses on the needs of stakeholders
• Use the checklist on p.113-120 to develop a robust evaluation workplan (terms of reference)
• Learn about different methods for impact evaluation by reading p.121-122
• Determine whether you have the right competencies on your team to conduct evaluation activities, using the list of competencies for evaluators on p.123-124 (skip the first section, which is specific to WHO)
• Use the checklist for evaluation reports on p.131-138 to ensure your report is complete
• Learn the terminology used in impact evaluation by reading the glossary of terms on p.139-151
Guide to Monitoring and Evaluating Knowledge Management in Global Health Programs
Organization: Center for Communication Programs, Johns Hopkins Bloomberg School of Public Health
Author(s): Saori Ohkubo, Tara M. Sullivan, Sarah V. Harlan, with Barbara K. Timmons and Molly Strachan
Pages: 136 pages

What is this about?
• This comprehensive guide details the different types of indicators you can use to measure 1) process (knowledge assessment, generation, capture, synthesis, sharing), 2) outputs (reach and engagement/dissemination, usefulness and quality), and 3) outcomes such as learning and action (awareness, attitude, intention, decision-making, policy, practice)
• Contains a logic model for knowledge management (KM) with inputs, outputs and (initial, intermediate and long-term) outcomes

How can you use it?
• Use Table 1 to learn about different data collection methods, their strengths and weaknesses, and costs
• Use Table 2 on p.15-16, p.17, p.33, p.45, p.53 and p.70-78 to see examples of process, output and outcome indicators that can inspire ideas for creating your own indicators that make sense for your research/KT project
• Use the reference list on p.61-65 as a starting point for reading about this topic in more detail
• Read the glossary on p.66-69 to understand the terminology related to indicators and evaluation
• Use the table on p.79-82 to get ideas for evaluation questions, and to see what you can look for at different stages in the uptake and implementation process (referred to in this guide as 'knowledge management capacity')
• Use the tools in the appendices: p.70 contains a list of indicators that can give you ideas for creating your own, p.96 provides background information on usability assessment, and p.100 is a tool you can use with your stakeholders to assess the usability of a KT product
Evaluation Guide
Organization: The Government of Western Australia
Author: Program Evaluation Unit, Department of Treasury, Government of Western Australia
Pages: 73 pages

What is this about?
• This comprehensive guide includes definitions and a glossary of terms for evaluation
• Explains the role of evaluation in decision-making, such as its use within the policy cycle and the budget cycle
• Explains the characteristics, scale and types of evaluation (e.g. formative/developmental, including cost-benefit analysis; process; summative/impact)
• Details rigour, utility, feasibility and ethics in evaluation
• Explains the evaluation process in five stages: 1) scoping, consultation and agreement (defining the purpose and goals); 2) planning (budget and methodology); 3) conducting the evaluation (questions, data collection and analysis); 4) reporting and making recommendations (helping to inform decisions based on the findings); 5) implementation planning (ongoing monitoring to ensure changes informed by the evaluation findings are having the intended effects)
• Contains references and further information, and appendices with information about: SMART results, evaluation and data collection types, example frameworks and program evaluation plans, questions to ask before using secondary data, and an evaluation report checklist

How can you use it?
• Skip the sections on their program evaluation unit and on programs that are subject to a sunset clause (pages 3-4), as these are specific to their context
• Use the five sections, one for each stage of the evaluation process, to understand the types of evaluations and how to plan and conduct them
• Use the glossary in Appendix A to understand evaluation terminology, including "activities", "outputs", "outcomes" and "impacts"
• Use the templates in Appendices B-G to inform your evaluation planning and data collection
• Use the evaluation report checklist in Appendix G to ensure you have included all the important elements in your evaluation report
Evaluating uptake, implementation and impact

Developing a Tool to Measure Knowledge Exchange Outcomes
Organization: n/a (peer-reviewed publication)
Author: Kelly Skinner, University of Waterloo
Pages: 25 pages

What is this about?
• This peer-reviewed publication in The Canadian Journal of Program Evaluation describes the process of developing measures to assess knowledge exchange (or knowledge translation, knowledge mobilization, etc.) outcomes, using the dissemination of a best practices document for diabetes as an example
• Methods include a literature search on the dissemination and adoption of health interventions and on approaches for measuring knowledge exchange (or KT, KMb, etc.)
• The outcome of the project was a tool that can be used to assess the outcomes of KE/KT/KMb for best practices
• Notes that the absence of appropriate evaluation leads to a lack of known best practices
• Acknowledges that the terms ‘diffusion’, ‘dissemination’, ‘knowledge exchange’, ‘knowledge transfer’, ‘knowledge translation’, ‘knowledge utilization’, etc. are all used in the literature and have similar meanings
• Acknowledges that a key factor for uptake and implementation is the involvement of stakeholders (end users) throughout the research project and KT activities, and encourages a “consumer mindset” by researchers, meaning that researchers should aim to develop solutions for the needs of consumers/end users/stakeholders in their research and KT activities

How can you use it?
• Gain an appreciation for the complexity of getting research used by learning about the different sub-stages of uptake and implementation (use of the knowledge or information): p.58 covers the progression from having access to information, through understanding it and being influenced to adopt and use it, to implementing it in practice and observing impacts as benefits to populations; p.61-62 provides details about the levels of use and decision points for uptake and implementation
• Use the figure on p.59, which maps the information utilization scale (with the stages-of-concern and levels-of-use scales) onto the evaluation utilization scale, to inform your choice of evaluation type(s) and to maximize the use of your evaluation results based on the characteristics of your intended target audience(s)
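The progression described above, from access through to observed impact, is essentially an ordered scale, which can be sketched as follows. The stage labels below paraphrase this summary and are not Skinner's exact scale items.

```python
# Ordered sub-stages of uptake and implementation, paraphrased from the
# summary above (not Skinner's exact scale wording).
UPTAKE_STAGES = [
    "access to the information",
    "understanding the information",
    "being influenced to adopt and use it",
    "implementing it in practice",
    "impacts observed as benefits to populations",
]

def stage_reached(evidence):
    """Return the furthest consecutive stage with supporting evidence (True),
    or "none" if even access has not been demonstrated."""
    reached = "none"
    for stage in UPTAKE_STAGES:
        if not evidence.get(stage):
            break
        reached = stage
    return reached
```

Thinking in these terms can help you decide which sub-stage your KT evaluation should actually measure, rather than stopping at counts of access alone.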
Overview: Data Collection and Analysis Methods in Impact Evaluation
Organization: Unicef
Author: Greet Peersman
Pages: 21 pages

What is this about?
• This short guide begins with an overview of planning for data collection and analysis, starting with a “theory of change” or logic model that links inputs, activities, outputs, outcomes and impacts
• The guide summarizes how to make the best use of your existing data by drawing on different types of indicators (input, output, outcome, impact) throughout the logic model for your research/KT project; how to ensure data collection and analysis methods are feasible; and tips on good data management for validity, reliability, completeness, precision, integrity and timeliness

How can you use it?
• The tables throughout this short guide provide a high-level overview of concepts related to impact evaluation that appear in more detail in other resources
• Use Table 1 on p.3 to see examples of key evaluation questions and the data collection method for each; this can help you decide which questions to ask and which data sources you will need to answer them
• Use Table 2 on p.3-4 to learn about 4 different data collection options, what is involved in using each method, and examples of each
• Use Table 3 on p.7 to inform your selection of sampling methods to ensure quality in data collection
• Use Table 4 on p.9 to understand the 4 purposes of mixing methods and combining data sources
• Use Table 5 on p.9-10 to understand different options for numerical (quantitative) and textual (qualitative) analysis
• Use Table 6 on p.10 to learn about 3 different data analysis approaches for causal attribution: counterfactual approaches, consistency of evidence with a causal relationship, and ruling out alternatives
• Learn about ethical issues and practical limitations in impact evaluation on p.11-12
• Use the example of good practices on p.12-14 to learn how to structure your evaluation
• Learn about challenges and how to overcome them by reading the examples of challenges on p.14-15
• Learn about impact evaluation in more detail via the links to additional resources on p.16
• Understand impact evaluation terminology by reading the glossary on pages i-iii (p.17-19)
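The logic-model chain this guide builds on (inputs → activities → outputs → outcomes → impacts, each stage tracked by its own indicator type) can be sketched as a simple data structure. The Python below is a minimal, illustrative sketch; the example project content is hypothetical and not drawn from the guide.

```python
from dataclasses import dataclass, field

# The five logic-model stages named in the guide, in causal order.
STAGES = ["inputs", "activities", "outputs", "outcomes", "impacts"]

@dataclass
class LogicModel:
    """Each stage holds project items plus the indicators used to track them."""
    stages: dict = field(
        default_factory=lambda: {s: {"items": [], "indicators": []} for s in STAGES}
    )

    def add(self, stage, item, indicator):
        self.stages[stage]["items"].append(item)
        self.stages[stage]["indicators"].append(indicator)

    def summary(self):
        return [f"{s}: {len(self.stages[s]['items'])} item(s)" for s in STAGES]

# Hypothetical KT project, for illustration only
model = LogicModel()
model.add("inputs", "research team time", "person-hours committed")
model.add("activities", "stakeholder workshops", "number of workshops held")
model.add("outputs", "plain-language summary", "number of downloads")
model.add("outcomes", "practice change by clinicians", "% of clinicians reporting use")
model.add("impacts", "improved patient outcomes", "change in patient-reported measures")
```

Laying a project out this way makes the guide's point concrete: every stage of the chain, not just outputs, should have at least one indicator attached before data collection begins.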
Impact Evaluation in Practice, 2nd edition
Organization: International Bank for Reconstruction and Development/The World Bank
Authors: Paul J. Gertler, Sebastian Martinez, Patrick Premand, Laura B. Rawlings, Christel M.J. Vermeersch
Pages: 367 pages

What is this about?
• This comprehensive, evidence-informed book begins with an overview of impact evaluation: what it is, why it is important, and complementary approaches such as cost-benefit analysis and cost-effectiveness analysis
• The book covers, in greater detail, the same aspects of evaluation addressed by the shorter guides in this Evaluation Guide of Guides: types of evaluation (formative and summative), formulating evaluation questions, ethical considerations, developing an evaluation plan including a ‘theory of change’ (logic model), and operationalizing the plan, including data collection, indicator selection and construction (using the SMART structure), and finding sources of data
• The theory of change/logic model example on p.35 depicts the underlying structure of Phipps’ Co-Produced Pathway to Impact model
• There is a section on causal inference and counterfactuals, randomized assignment (including a checklist), and internal and external validity
• Chapter 14 explains the realities of how to influence policy, such as answering the questions that policymakers need answered (the importance of stakeholder engagement!)
• There are references to additional resources/readings at the end of each chapter
• There is a glossary of terms at the end of the book
• It includes case studies/examples of impact stories throughout

How can you use it?
• Use the general logic model/theory of change diagram on p.35 and the examples on p.37, 39 and 40 as templates for thinking about and planning your process and activities toward achieving impact with your research project and associated KT activities
• This book is heavy on evaluation methodologies, which will realistically be more overwhelming than useful for evaluating research and/or KT projects (e.g. you may wish to skip pages 47-200, especially if you are new to evaluation). These methods require a full-time evaluation expert, which most research projects do not have the resources to support; if you do have the resources, these pages can be informative when you are seeking detailed information about different methodologies
• Use the “General Outline of an Impact Evaluation Plan” on p.207 to learn which components are necessary for an impact evaluation plan, and to determine whether you have adequate resources to do an impact evaluation
• Learn how to set up research partnerships/collaborations and plan for an impact evaluation on p.208-215
• Learn how to budget for an evaluation, and about options for funding one, on p.216-229
• Use the checklist on p.243 for doing “an ethical and credible impact evaluation”
• Use the box on p.305 to help you design and format questionnaires
• Use the “Checklist: Core Elements of a Well-Designed Impact Evaluation” on p.320 to make sure you have considered everything you need for an impact evaluation
• Use the “Checklist: Tips to Mitigate Common Risks in Conducting an Impact Evaluation” on p.320-321
• Use the glossary on p.325-335 to increase your understanding of evaluation-specific terminology
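The book's SMART structure for indicators can be turned into a simple review checklist. In the sketch below the expansion of SMART (specific, measurable, attributable, realistic, targeted) is a common one and should be checked against the book's own wording; the indicator being reviewed is a hypothetical example.

```python
# A review checklist for a candidate indicator, under an assumed SMART
# expansion (specific, measurable, attributable, realistic, targeted);
# verify the criterion wording against the book itself.
SMART_CRITERIA = {
    "specific": "Does it measure exactly what it claims to measure?",
    "measurable": "Can it be quantified with obtainable data?",
    "attributable": "Can observed change plausibly be linked to the project?",
    "realistic": "Can the data be collected at reasonable cost and effort?",
    "targeted": "Is it tied to the population the project serves?",
}

def review_indicator(answers):
    """Return the SMART criteria an indicator fails (answer is False or missing)."""
    return [c for c in SMART_CRITERIA if not answers.get(c)]

# Hypothetical review of one indicator
failed = review_indicator({
    "specific": True, "measurable": True, "attributable": False,
    "realistic": True, "targeted": True,
})
# failed == ["attributable"]
```

Running every proposed indicator through a checklist like this before data collection starts is far cheaper than discovering mid-project that an indicator cannot be measured or attributed.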
Creating indicators

The Good Indicators Guide: Understanding How to Use And Choose Indicators (can download .pdf from this link)
Organization: NHS Institute for Innovation and Improvement
Author: David Pencheon
Pages: 40 pages

What is this about?
• This is a brief but thorough introduction to indicators: what they are, why they are important, and the roles of measurement
• It gives an overview of the “Plan, Do, Study, Act” model for improvement
• There are sections on: statistics and understanding variation; what really motivates people and organizations to change; the importance of communication, timing, context and the original purpose/goal; frequently asked questions; criteria for good indicators and how to critically appraise indicators (with examples on p.26-27); and myths about indicators
• There is a glossary of common terms related to indicators

How can you use it?
• Use the “anatomy of an indicator” on p.9 as a guide for constructing indicators for your research and associated KT activities
• Use the table on p.10 to think through 10 key questions about your indicators; this will help you identify revisions that need to be made, or even decide whether to change the indicators completely. The table also helps you think through where the data for your chosen indicators will come from, and identify potential problems so that you can anticipate and mitigate them early in the process
• Use the glossary on p.30 to understand terminology related to indicators
• Use the links to further reading on p.33-34 to learn about indicators in more detail
• Use the example of the full anatomy of an indicator to define your indicators in detail
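The idea behind an "anatomy of an indicator" is that an indicator is a structured record with named parts, not just a phrase. The sketch below is a hypothetical simplification for illustration; the exact fields in the guide's template on p.9 may differ, and the example indicator is invented.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    # Hypothetical simplification of an indicator's parts; check the
    # guide's own template on p.9 for the full field list.
    title: str
    rationale: str    # why this indicator matters
    numerator: str    # what is counted
    denominator: str  # what it is counted against
    data_source: str  # where the data will come from (a key question on p.10)

    def describe(self):
        return (f"{self.title}: {self.numerator} / {self.denominator} "
                f"(source: {self.data_source})")

# Hypothetical KT indicator
workshops = Indicator(
    title="Stakeholder workshop reach",
    rationale="Engagement is a precursor to uptake",
    numerator="stakeholders attending at least one workshop",
    denominator="stakeholders invited",
    data_source="workshop registration records",
)
```

Forcing yourself to fill in every field, especially the data source, surfaces exactly the availability problems the p.10 questions are designed to catch.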
Evaluation Plan Workbook
Organization: Point K Learning Center
Author: not listed
Pages: 25 pages

What is this about?
• This short guide explains what evaluation is, the history of evaluation, principles of evaluation, and why you may choose to evaluate
• Contains a section on evaluation planning for activities, outputs and outcomes
• Explains indicators, the elements of a strong indicator statement, the difference between direct and indirect indicators, and indicator examples
• There is a section on data collection methods
• There are appendices containing 2 evaluation templates, one for “implementation” and one for “outcomes”

How can you use it?
• Use the information on indicators on p.11-12 to construct comprehensive indicators that are meaningful, direct, useful, and practical to collect (avoid proposing indicators without considering the availability of the data you will need in order to report on them)
• Use the information on p.13 to develop strong indicator statements that include: how much change you need to see; the target population you will measure; the condition, behaviour or characteristic you will measure; and the timeframe in which the change should occur
• Use the table of example indicators on p.14 to help you formulate outcome indicators
• Use the examples of short-term, intermediate and long-term outcome indicators in the table on p.16 to craft these types of indicators for your research, KT and evaluation outcomes
• Learn the difference between direct and indirect indicators by reading the examples in the table on p.17
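The four elements of a strong indicator statement listed above (amount of change, target population, what is measured, and timeframe) can be slotted into a template sentence. The template and example values below are hypothetical illustrations, not taken from the workbook.

```python
# Build a strong indicator statement from the four elements the workbook
# lists on p.13. Template wording and example values are hypothetical.
def indicator_statement(change, population, measure, timeframe):
    return f"{measure} among {population} will change by {change} within {timeframe}."

statement = indicator_statement(
    change="20%",
    population="clinicians at participating sites",
    measure="Self-reported use of the best-practice guideline",
    timeframe="12 months",
)
# statement == ("Self-reported use of the best-practice guideline among "
#               "clinicians at participating sites will change by 20% "
#               "within 12 months.")
```

If any of the four slots is hard to fill for one of your indicators, that is usually a sign the indicator statement is not yet strong enough to evaluate against.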