© 2023 The World Bank Group
1818 H Street NW, Washington DC 20433
Telephone: 202-473-1000; Internet: www.worldbank.org and www.ifc.org

This work is a product of the staff of The World Bank and the International Finance Corporation (the World Bank Group) with external contributions. The findings, interpretations, and conclusions expressed in this work do not necessarily reflect the views of The World Bank's Board of Executive Directors or the governments they represent. The World Bank does not guarantee the accuracy of the information included in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of The World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries.

The material in this work is subject to copyright. Because the World Bank encourages dissemination of its knowledge, this work may be reproduced, in whole or in part, for noncommercial purposes as long as full attribution to the work is given. Please cite the work as follows: "Vilar-Compte M, Villar Uribe M. Implementation toolbox to document and analyze primary health care innovations."

All queries on rights and licenses, including subsidiary rights, should be addressed to World Bank Publications, The World Bank Group, 1818 H Street NW, Washington, DC 20433, USA; fax: 202-522-2625; e-mail: pubrights@worldbank.org

The design of this toolbox benefited enormously from the collective work of the World Bank's PHCPI team and Costa Rica's Social Security Fund during the piloting stage. The authors would like to acknowledge the contribution of Pablo Gaitán-Rossi (World Bank, PHCPI Consultant) in the definition of the methodology, as well as the invaluable comments of the peer reviewers Kazumi Inden, Amber Hromi-Fiedler, Miriam Schneidman, and Illona Varallyay. Diego Vapore was responsible for editorial and graphic design, a fundamental part of making the tool user-friendly.

Implementation science is a multidisciplinary field that focuses on studying and promoting effective strategies for translating evidence-based interventions (EBIs) into routine practice to improve outcomes in real-world settings. (1) It has been increasingly used to document and assess interventions, as it helps to make sense of how, when, where, and why research results and EBIs are, or are not, being successfully used. (1) Compared to the traditional design, implementation, and management of health care projects, implementation science adds value by addressing the specific complexities and challenges associated with implementing and scaling up these interventions. More specifically, it helps to analyze and understand the contextual factors, stakeholder dynamics, and system-level barriers that can hinder the successful adoption and integration of health care interventions. By integrating implementation science principles, the World Bank can tailor strategies, develop robust implementation plans, and leverage evidence-based practices to overcome implementation problems, which is particularly useful for primary health care interventions and other interventions targeted at improving health and nutrition outcomes at scale. It is important to consider implementation strategies that are responsive to context, as they help bridge the gap between EBIs and implementation outcomes. (2)
Tailoring implementation strategies to the specific context enables implementers and managers to address barriers and leverage facilitators, thereby increasing the likelihood of success. In the long run, if interventions cannot address scale-up by tackling the specifics of HOW and WHEN an EBI works, then continued support and investment will be jeopardized. (2)

This toolbox leverages frameworks and practical strategies to incorporate implementation science into intervention design, management, and assessment. More specifically, the toolbox offers guidance to World Bank task team leaders (TTLs) and government counterparts to develop program impact pathways (PIPs), assess implementation fidelity, and map core elements for the sustainability and scale-up of interventions, through a guide, videos, and templates that facilitate its application. It is organized in modules to ease the process. The toolbox offers practical recommendations that can be taken as given or adapted to the particular challenges faced in each context. The main goal is to help apply methods targeted at understanding what happens in the period between the design of an intervention and the traditional outcome evaluation. It makes it possible to map the implementation process and assess how implementation actually happens. With this process, TTLs and governments can correct implementation gaps that can negatively affect desired outcomes, can enhance participatory processes, and can reduce the timeframe between demonstration and scale-up by identifying the "active ingredients" of an intervention that make it effective (that is, the core elements).

The tendency to ignore implementation gaps stems from a traditional focus on project design and outputs, rather than on the challenges and gaps that can arise during implementation itself. Without recognizing and addressing such implementation gaps, practitioners miss the opportunity to identify and overcome barriers, leading to suboptimal project outcomes and limited impact on the ground. Therefore, a better understanding of implementation gaps is crucial for reorienting implementation and enhancing the effectiveness of interventions and innovations.

Modules 1 and 2 define PIPs and propose a practical way to develop the PIPs of interventions. This helps to map the process and the pathways of what is planned to be implemented. Modules 3 and 4 define and propose mechanisms to conduct an implementation fidelity assessment, which compares actual implementation (that is, real-world practice) with the design initially proposed (that is, the plan depicted in the PIP). A fidelity analysis thus makes it possible to identify implementation gaps and critical quality points. These sections of the toolbox also guide users in addressing the tension between fidelity and adaptation and in defining what elements of the intervention can or cannot be adapted to other contexts without voltage drops or drifts from its initial design. This requires identifying the core elements of the intervention.

Box 1: Definition of key implementation science terms relevant for the toolbox

Adaptation: A process of thoughtful and deliberate alteration to the design or delivery of an intervention, with the goal of improving its fit or effectiveness in a given context. (3)

Core elements: The "active ingredients" of an intervention that make it effective. Also referred to as core components or core functions. (1)
Fidelity: The extent to which an EBI is delivered or executed as designed. Measures of fidelity may include: (a) adherence to the intervention, (b) dose of the intervention delivered, and (c) quality of intervention delivery. (1)

Implementation science: Multidisciplinary field designed to generate evidence to explain and predict the translation of EBIs into practice settings to improve public health, and to yield effective methods uncovered through this process. (1)

Implementation gaps: Evidence of failure or partial success in implementing interventions that have been shown to be cost-effective; discrepancies between the intended or planned implementation of an intervention and the actual implementation that takes place in real-world settings. (4)

Program drift: When the expected effect of an intervention is presumed to decrease over time as practitioners adapt delivery of the intervention. (5)

Program impact pathway (PIP): A visual representation of the architecture of an intervention, developed using information derived from an intimate knowledge of the program. (6) This representation portrays a clear articulation of intervention activities, how they are implemented, and how they are expected to be linked with the immediate, intermediate, and final outcomes. (7)

Program voltage drop: When the effect of an intervention is presumed to decrease as it moves from demonstration stages to scaled-up implementation. (5)

Scale-up: The process by which interventions shown to be efficacious on a small scale and/or under controlled conditions (that is, demonstration projects, pilots, etc.) are expanded under real-world conditions into broader policy or practice. (8)

Stakeholders: Individuals who help inform contextual assessment of constructs. These may include individuals at different levels of organizations (that is, clinicians, administrators, leaders), community settings, and potential beneficiaries (that is, patients). They are individuals who influence or are influenced by the implementation of an intervention. (1)

Sustainability: In its most basic definition, the continued use of an intervention in practice. However, it can also imply (i) whether the core elements are maintained, (ii) the extent to which desired health benefits are maintained and improved over time, or (iii) whether the intervention continues to function at the required level to maintain the desired benefits. (9)

The toolbox emphasizes linking the implementation frameworks to the project cycle (see Figure 1) used by the World Bank to design, prepare, implement, and supervise projects. Modules 1 and 3 offer a particular reflection on the stages of the project cycle that would benefit from the use of a PIP and/or an implementation fidelity analysis.

Figure 1. World Bank Project Cycle
Source: https://projects.worldbank.org/en/projects-operations/products-and-services/brief/projectcycle

It is important to highlight that the initial piloting of this toolbox was conducted in Costa Rica to assess the implementation of an integrated care network demonstration project (see Box 2). This experience is used extensively throughout the toolbox to provide practical examples, not only for educational purposes but also to highlight the feasibility of using such tools.
Box 2: Costa Rica's experience in using the implementation science toolbox

Context: Integrated health service delivery models are currently being implemented in countries around the globe; these models aim to leverage improvements to the primary health care system to increase accessibility, timeliness, coordination, quality, and efficiency in the delivery of care. Costa Rica has made significant strides in implementing an integrated health service delivery network model through a demonstration project run by Costa Rica's Social Security Fund. The project has sought to improve care coordination, accessibility, and timeliness of care for the population of the Huetar-Atlántica region through the creation of a governance structure and changes to the care models and pathways. More specifically, the demonstration project was structured around the components defined by stakeholders during the design of the intervention:

1. Needs assessment in the context of the service delivery network.
2. Establishment, organization, and functionality of the health network governance.
3. Organization of type 2 diabetes (T2D) integrated management services.

Implementation science research tools were used to document the activities undertaken by this major reform (that is, PIPs), understand implementation fidelity, and identify key processes necessary for the sustainability and scale-up of the innovations to the country's other regions. The toolbox was followed to develop the PIPs, assess implementation fidelity indicators, and identify core elements for each component.

The first step was to define six PIPs that visually describe the processes that must be followed to establish a network delivery service system. Two of the PIPs are tied directly to components of the demonstration project (needs assessment in the context of the service delivery network, and organization of T2D integrated management services). The component linked to the establishment, organization, and functionality of the health network governance, however, was broken down into four PIPs: one focused on governance and management mechanisms, and three additional ones described the clinical delivery of interventions promoting service integration, effective use of resources, and timely access to services; these clinical delivery interventions include the integration of ambulatory surgery services, outpatient management, and remote specialized care. Each of these has particular purposes and processes, but as a whole they contribute to a new way of organizing networked service delivery.

From an implementation fidelity perspective, the assessment revealed that, while the network delivery model proved to be a complex system, it adhered acceptably to the PIPs. Based on the prior analyses, core elements for each of the six PIPs were mapped.

The use of the toolbox led to a productive collaboration between the World Bank and Costa Rica's Social Security Fund. The national stakeholders highlighted that the methodology is pragmatic and led to useful outcomes that are helping them scale the project to other regions of the country. This underscores the feasibility of the toolbox and the importance of adding implementation science tools to the World Bank's project cycle.
Note: For more detailed information about the PIPs, implementation fidelity, and core elements of the demonstration project in Costa Rica, please refer to the following documents:

Implementing Integrated Health Service Networks in the Huetar-Atlántica Region of Costa Rica: An Assessment of the Process. Authors: Martinez, Luis Carlos Vega; Vilar-Compte, Mireya; Gaitán-Rossi, Pablo; Villar Uribe, Manuela.

Process Evaluation of the Implementation of Integrated Networks for the Provision of Health Services. Authors: Vilar-Compte, Mireya; Gaitán-Rossi, Pablo; Velázquez, Natalia Rovelo; Bernal, Óscar; Villar Uribe, Manuela.

Fidelity and Sustainability in the Implementation of the Integrated Networks for the Provision of Health Services of the Huetar-Atlántica Region, Costa Rica. Authors: Vilar-Compte, Mireya; Gaitán-Rossi, Pablo; Velázquez, Natalia Rovelo; Bernal, Óscar; Villar Uribe, Manuela.

Analysis of Primary Health Care System Capacity in the Huetar Atlántica Region of Costa Rica. Authors: Eesha Desai, MS; Joseph Ross, MPA; Natalia Rovelo; Oscar Bernal, MD, PhD; Jess Wiken; Zeina Siam, PhD, MS; Dan Schwarz, MD, MPH; Manuela Villar Uribe, PhD, MPH; with technical contributions and review by the Program for Strengthening the Provision of Health Services (PFPSS) of Costa Rica.

Lastly, it is likely that further iterations in the use of this toolbox will lead to further adaptations and more examples of its use that will enrich our learning process.

1. Weiner BJ, Lewis CC, Sherr K. Practical implementation science: moving evidence into action. Springer Publishing Company; 2022.
2. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implementation Science. 2013;8(1):1-11.
3. Wiltsey Stirman S, Gamarra JM, Bartlett BA, Calloway A, Gutner CA. Empirical examinations of modifications and adaptations to evidence-based psychotherapies: Methodologies, impact, and future directions. Clinical Psychology: Science and Practice. 2017;24(4):396.
4. Villalobos Dintrans P, Bossert TJ, Sherry J, Kruk ME. A synthesis of implementation science frameworks and application to global health gaps. Global Health Research and Policy. 2019;4(1):1-11.
5. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implementation Science. 2013;8(1):117.
6. Pérez-Escamilla R, Segura-Pérez S, Damio G. Applying the Program Impact Pathways (PIP) evaluation framework to school-based healthy lifestyles programs: Workshop Evaluation Manual. Food and Nutrition Bulletin. 2014;35(3_suppl2):S97-S107.
7. Avula R, Menon P, Saha KK, Bhuiyan MI, Chowdhury AS, Siraj S, et al. A Program Impact Pathway Analysis Identifies Critical Steps in the Implementation and Utilization of a Behavior Change Communication Intervention Promoting Infant and Child Feeding Practices in Bangladesh. The Journal of Nutrition. 2013;143(12):2029-37.
8. Milat AJ, Bauman A, Redman S. Narrative review of models and success factors for scaling up public health interventions. Implementation Science. 2015;10(1):1-11.
9. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implementation Science. 2012;7:17.
Available materials linked to Module 1:
• Introductory document
• Video tutorial
• Quiz for self-assessment (definition and usefulness of PIPs)

The importance of innovating in primary health care has increased in recent decades due to challenges such as increased life expectancy, a high burden of chronic conditions, and rising costs. Innovations are strongly influenced by national health systems, which encompass a wide variety of actors and processes. Despite the relevance of the topic, innovations are not usually documented or assessed in terms of their design, implementation, and management; (10) this limits the possibilities for the sustainability and scale-up of innovations. In addition, the lack of rigorous documentation and monitoring of primary health care innovations limits the evidence on their alignment with the underlying causes of the problems they seek to address and on whether the intervention's design can achieve the desired changes. One of the biggest challenges in documenting primary health care innovations is to understand their complexities and to describe whether the designed innovations are really delivering what it takes to achieve the expected changes in health care provision. These challenges can be addressed through the use of implementation science tools such as the Program Impact Pathway (PIP) framework. While the motivation for the toolbox emerges from the gaps seen in primary health care innovations, the tools presented in this module and the subsequent ones can be used for interventions in other policy arenas.

While the PIP framework was originally developed to help understand what it would take for mother-child nutrition programs to reduce child mortality in low- and middle-income countries, (11) it is now recognized as a methodology to assess the implementation of different types of interventions. PIPs provide a logical sequence of steps, from inputs and activities to outputs and outcomes, and ultimately highlight the causal pathways and mechanisms through which an intervention brings about change. This understanding enables stakeholders to pinpoint implementation gaps and make corrections to ensure effective scaling of the intervention. (12) For example, a PIP may reveal that a lack of community engagement is hindering implementation, prompting stakeholders to prioritize community involvement strategies to address the gap and improve the intervention's effectiveness. The PIP methodology leads to diagrams that represent a program's architecture and are developed through iterative processes with stakeholders, which can include program administrators, program managers, front-line workers, and program clients. (13)

PIPs and the traditional logic frameworks used at the World Bank, such as the Results Framework, differ in their approach and level of detail. Table 1 summarizes the differences between the two approaches and highlights the added value of PIPs in project design and management.

Table 1. Comparison between PIPs and traditional logic frameworks

Complexity and dynamics
PIPs:
- Recognize and capture the complex and dynamic nature of project implementation.
- Acknowledge the multiple pathways through which interventions lead to desired outcomes.
- Consider contextual factors, feedback loops, and intermediate steps.
Traditional logic frameworks:
- Often present a linear and simplified view of cause-and-effect relationships.
- May overlook the intricacies and interdependencies of the implementation process.
Flexibility and adaptability
PIPs:
- Recognize that interventions may need to be modified or adjusted based on emerging evidence.
- Enable ongoing learning, course correction, and adaptation.
Traditional logic frameworks:
- Tend to be more rigid and static.
- Limit the ability to respond and adapt to new insights or challenges.

Stakeholder engagement
PIPs:
- Emphasize stakeholder engagement and collaboration throughout the project's life cycle.
- Encourage participatory decision-making and draw on stakeholders' expertise and perspectives.
Traditional logic frameworks:
- May not explicitly highlight the need for ongoing stakeholder engagement and collaboration.

Contextual factors
PIPs:
- Explicitly consider contextual factors that influence the success of interventions (that is, context-specific drivers and barriers).
Traditional logic frameworks:
- May not provide a detailed analysis of contextual factors.
- May potentially overlook critical factors that impact outcomes.

Iterative learning
PIPs:
- Promote ongoing monitoring and feedback mechanisms to inform decision-making.
- Incorporate learning from implementation experiences.
Traditional logic frameworks:
- May not explicitly facilitate iterative learning and continuous improvement.

PIPs are a useful tool to document intervention implementation and to realistically identify a program's potential impact and the key factors that need to be monitored for the sustainability of the innovation. Understanding and documenting how interventions work to achieve their outcomes, and identifying factors that contribute to the observed success or failure in implementation, are critical for replication and utilization. In this context, mapping pathways of how interventions are implemented helps to: (14)

1) Contextually ground the interpretation of results.
2) Differentiate poor design from poor implementation.
3) Identify factors that can influence the utilization of the interventions.

A PIP analysis helps to answer not only whether the impact was achieved but also how and why it was achieved (or not), accounting for contextual factors that influence the intervention implementation. (15) For implementers and funders, a PIP analysis can be crucial to identify specific actions that need to be corrected and/or complementary activities that might be essential for sustainability or scale-up. In this sense, the PIP can be used for monitoring and strengthening intervention delivery, facilitating course correction at various stages of implementation, and understanding the mediating and modifying determinants of intervention impact. (16)

In summary, compared to the traditional project design, implementation, and management approaches at the World Bank, PIPs can provide a more comprehensive and dynamic understanding of the causal links between interventions, outcomes, and impacts. PIPs provide:

1. A holistic understanding of the complex and interdependent factors that influence the success of interventions.
2. Adaptive management, which recognizes that interventions often need to adapt and respond to changing circumstances, emerging challenges, and new evidence.
3. Engagement and collaboration, by involving relevant actors at different stages of the impact pathway.

PIPs can enhance the ability of World Bank technical teams and country partners to design, implement, and manage health care interventions effectively, leading to better outcomes, increased impact, and improved health care service delivery.
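To make the idea of a PIP as a set of causal pathways concrete, the minimal sketch below (illustrative only; it is not part of the toolbox, and the node names are hypothetical, loosely based on the T2D example discussed in Module 2) represents a simplified PIP as a directed graph in Python. Each box is tagged with its category, arrows are directed edges, and a simple traversal flags any activity whose pathway never reaches an impact, which is exactly the kind of gap a stakeholder review is meant to surface.

```python
# Minimal, hypothetical PIP represented as a directed graph.
# Node -> category (input, activity, output, outcome, impact).
categories = {
    "trained providers": "input",
    "T2D self-management group sessions": "activity",
    "community outreach (not yet linked)": "activity",
    "patients trained in T2D self-care": "output",
    "people with T2D empowered": "outcome",
    "reduced T2D morbidity and mortality": "impact",
}

# Directed edges encode the PIP's arrows (assumed causal pathways).
edges = {
    "trained providers": ["T2D self-management group sessions"],
    "T2D self-management group sessions": ["patients trained in T2D self-care"],
    "patients trained in T2D self-care": ["people with T2D empowered"],
    "people with T2D empowered": ["reduced T2D morbidity and mortality"],
}

def reaches_impact(node, seen=None):
    """Depth-first check that a node's pathway eventually reaches an impact."""
    seen = set() if seen is None else seen
    if categories.get(node) == "impact":
        return True
    seen.add(node)
    return any(reaches_impact(nxt, seen)
               for nxt in edges.get(node, []) if nxt not in seen)

# Flag activities with no pathway to an impact: candidate design gaps
# to raise with stakeholders before the diagram is finalized.
for node, cat in categories.items():
    if cat == "activity" and not reaches_impact(node):
        print(f"Gap: activity '{node}' has no pathway to an impact")
```

In practice the diagram lives on a whiteboard rather than in code; the point is simply that a PIP's arrows carry connectivity assumptions that can be checked box by box.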
Mapping the pathways of how interventions are designed, implemented, and utilized can enable contextually grounded interpretations of outcomes and help identify factors and bottlenecks that might hinder their impact and thus need to be addressed. Accordingly, a PIP analysis helps to identify pathways and quality critical control points (CCPs). CCPs are crucial for understanding the set of indicators that need to be monitored to maximize the impact of a given intervention. (15)

When considering the World Bank project cycle (Figure 1), PIPs can be useful at three different points:

1. PIPs can be particularly useful at the project preparation stages (1 to 4), in which implementation and project management arrangements are determined (Figure 1, red stars).
2. PIPs can also be useful when a project has already commenced implementation (stage 5) but has not been thoroughly documented (Figure 1, blue star).
3. PIPs can also be valuable when a task team wants to introduce innovations during project implementation (stages 5 and 6), potentially as a result of an evaluation at the point of implementation completion (Figure 1, white stars).

Figure 1. World Bank Project Cycle

An example of the usefulness of PIPs in documenting and monitoring innovative practices in primary health care is the implementation evaluation of Costa Rica's health network delivery service demonstration project in the Huetar-Atlántica Region (see Box 2). Together with local stakeholders (providers and managers involved in the actual implementation of the network and service delivery in the region) and a technical team from Costa Rica's Social Security Fund, a World Bank team mapped the PIPs. Six PIPs were developed to visually summarize the processes that must be followed to establish a network delivery service system:

1. Needs assessment in the context of the service delivery network.
2. Governance and management mechanisms.
3. Clinical delivery of interventions promoting service integration, effective use of resources, and timely access to services, comprising three PIPs:
   i. Integration of ambulatory surgery services.
   ii. Outpatient management (day hospital).
   iii. Remote specialized care.
4. Organization of T2D integrated management services.

This is an example of a PIP that was developed when implementation (stage 5) had already commenced, but the implementation processes had not been documented. This was particularly relevant because the network and service delivery model designed and implemented was a demonstration project seeking to inform future scale-up to the other regions of the country. The PIPs allowed policymakers from Costa Rica's Social Security Fund to assess the fidelity of the intervention and to identify the CCPs for monitoring and sustainability.

PIP of integrated service delivery for people in the network living with T2D

10. Proksch D, Busch-Casler J, Haberstroh MM, Pinkwart A. National health innovation systems: Clustering the OECD countries by innovative output in healthcare using a multi indicator approach. Research Policy. 2019;48(1):169-79.
11. Kim SS, Habicht J-P, Menon P, Stoltzfus RJ. How do programs work to improve child nutrition? Program impact pathways of three nongovernmental organization intervention projects in the Peruvian highlands. Washington, DC: IFPRI; 2011.
12. Vossenaar M, Tumilowicz A, D'Agostino A, Bonvecchio A, Grajeda R, Imanalieva C, et al. Experiences and lessons learned for programme improvement of micronutrient powders interventions. Maternal & Child Nutrition. 2017;13:e12496.
13. Pérez-Escamilla R, Segura-Pérez S, Damio G. Applying the Program Impact Pathways (PIP) evaluation framework to school-based healthy lifestyles programs: Workshop Evaluation Manual. Food and Nutrition Bulletin. 2014;35(3_suppl2):S97-S107.
14. Avula R, Menon P, Saha KK, Bhuiyan MI, Chowdhury AS, Siraj S, et al. A Program Impact Pathway Analysis Identifies Critical Steps in the Implementation and Utilization of a Behavior Change Communication Intervention Promoting Infant and Child Feeding Practices in Bangladesh. The Journal of Nutrition. 2013;143(12):2029-37.
15. Buccini G, Harding KL, Hromi-Fiedler A, Pérez-Escamilla R. How does "Becoming Breastfeeding Friendly" work? A programme impact pathways analysis. Maternal & Child Nutrition. 2019;15(3):e12766.
16. Mbuya MN, Jones AD, Ntozini R, Humphrey JH, Moulton LH, Stoltzfus RJ, et al. Theory-driven process evaluation of the SHINE trial using a program impact pathway approach. Clinical Infectious Diseases. 2015;61(suppl_7):S752-S8.

Available materials linked to Module 2:
• Guidance on the steps
• Video tutorial
• Checklist & working templates
  o Appendix 1: PPT template for working sessions with stakeholders to provide an understanding of the PIP methodology, trust in the working team, and ownership of the PIP
  o Appendix 2: Template for collecting feedback in the first round of PIP review
  o Appendix 3: PPT template for the review workshop session
  o Appendix 4: Aids for preparing the PIP report
• Quiz for self-assessment (step-by-step)

PIPs are a useful tool to document the implementation of interventions. They help describe the steps through which an intervention is expected to achieve impact. PIPs identify the processes that must be in place for the program to achieve its main or long-term outcome. This means showing the links between the sequence of steps in getting from activities to impacts and describing the causal assumptions behind such links. The aim of this document is to provide guidance on how PIPs can be developed to document new or already-implemented interventions. Such documentation should be seen as a basis for scale-up, monitoring, and evaluation. It is therefore useful for implementers/managers, funders, and specialists in primary health care.

There is no universally accepted way of developing a PIP. For the purposes of the current guide, four general steps are proposed (see Figure 1). It is important to highlight the cyclical nature of these steps, as several iterations are likely to happen.

Figure 1. Steps to developing a PIP: (1) definition of the working team; (2) information gathering; (3) synthesis of information; (4) PIP definition, iteration, and consensus.

Even though a PIP should result from consensus, a team needs to coordinate the work linked to information gathering, analysis, and synthesis, and to lead the initial drafting of the PIP. A key question is who should be part of this leading team. The answer is highly contextual, but characteristics to consider include: (i) knowledge about monitoring and evaluation, systems dynamics, and the innovation or intervention itself; and (ii) competencies in group facilitation and stakeholder dialogue. The leading team may be made up of internal or external members (that is, consultants), and it can also have a hybrid structure (that is, an internal leader and an external team or group).
Regardless of its composition, the team needs to comprise individuals who are free of conflicts of interest and credible to the various parties involved. In addition, the group should be able to respond to the needs of the program managers, policymakers, and/or funders and should have the ability to communicate with stakeholders, broadly defined (that is, politicians, technical personnel, beneficiaries, program implementers, etc.). The team should be small enough to operate effectively, but of an adequate size to respond to the needs and timeline of the intervention being documented and assessed. It is suggested that the team comprise three to six members with diverse areas of expertise and seniority.

Box 1: Who led the PIP of Costa Rica's health network delivery service demonstration project in the Huetar-Atlántica Region?

The implementers of the health network delivery service demonstration project in the Huetar-Atlántica Region in Costa Rica had access to technical support from the World Bank. The World Bank hired two public health researchers with broad experience in evaluation and implementation science. The researchers had worked extensively in the Latin American and Caribbean region and were able to communicate with stakeholders, review documents, and generate all the reports in Spanish. In addition, they were committed to serving as liaisons between the World Bank and the implementers and viewed the implementers as their primary client. The team also included one project management assistant who provided logistics and technical support. From the implementers' perspective, having an external technical team helped lend credibility to the process.

A PIP of a primary health care intervention seeks to describe the steps through which the intervention is expected to achieve a given outcome. As such, information is needed to understand issues such as:

• Program goals
• Types of services provided
• Target audience(s)
• Quality of care indicators
• Setting(s) of the innovation
• Level of utilization of services or intervention
• Level of coverage
• Human or physical resources
• Specific activities, operations, components

This type of information is usually available from three general types of sources: (i) documents relating to the intervention, (ii) health-related data sources, and (iii) interviews with key stakeholders. Table 1 describes each of these information sources in further detail and provides examples.

Table 1. Information sources to develop the PIP of a primary health care innovation or intervention

Documents
Description:
- Generally produced by governmental agencies and/or not-for-profit organizations designing and/or implementing the intervention.
- Pre-intervention documents tend to justify the relevance and describe budgetary issues, the institutions involved, legislation, etc. They tend to be generic and describe the intended implementation.
- Documents prepared once the intervention has already started tend to be reports about activities, budgetary issues, etc.
- Quality, relevance, and extension of these documents tend to be highly uneven.
Example (PIP of Costa Rica's pilot network model):
- A preliminary review of documents provided by the World Bank was conducted, which contributed to the understanding of the network model and its three components (needs, governance, and type 2 diabetes management).
- A documentary review was conducted for each of the components. It mostly included government documents specifying institutional and legal frameworks, guidelines and methodological aspects for implementing activities, strategic plans, background information about the region and the health system, etc.
Data sources
Description:
- Secondary data (already available), including surveys or administrative data looking at capacity, usage, training, promotion, coverage, satisfaction, etc.
- Mainly quantitative data that has already been collected and informs the different steps of the intervention.
- Common examples: quality or satisfaction surveys, data emerging from clinical files, national primary health care infrastructure data, budgetary data on performance monitoring indicators, etc.
Example (PIP of Costa Rica's pilot network model):
- No specific data sources were used. Data was available from reports.

Interviews
Description:
- Gathering information from stakeholders is fundamental for gaining understanding about the intervention.
- Stakeholders can be those designing, managing, implementing, financing, or monitoring the intervention, as well as beneficiaries.
- Narratives can be collected through semi-structured interviews, focus groups, or working sessions.
Example (PIP of Costa Rica's pilot network model):
- No formal interviews were conducted at this stage of the PIP development. However, during two working sessions with the designers and managers at the central government level, relevant information was obtained regarding the goals of the innovation and how it related to the already operating health system.
- Follow-up calls and emails were exchanged for further clarification about specific inputs and activities of the pilot network's various components.

The PIP working team will be responsible for gathering all this information. Obtaining all these documents and gaining access to stakeholders often requires building relationships with designers, implementers, managers, and/or funders. From a policymaking perspective, these actors need to trust the PIP team, understand its usefulness, and build ownership around the PIP itself. Generally, the PIP working team will need to hold some working sessions with key stakeholders to build this relationship. Appendix 1 contains some PPT templates that can aid in structuring these sessions. The number of working sessions will depend on the context.

Box 2. How did the PIP working team foster understanding, trust, and ownership regarding the methodology among Costa Rica's designers and managers of the demonstration network model?

The PIP working team was established in the context of an already established relationship between the World Bank and Costa Rica's Social Security Fund. While there was good rapport, the two had not been able to agree on the terms of a qualitative evaluation of the demonstration network model. The PIP team thus needed to respond to this challenge by proposing a clear and feasible project that could respond to the Social Security Fund's need for a qualitative assessment, while also providing room for an evaluation that would allow the World Bank to adapt and document the network model as a primary health care innovation that could be fully or partially replicated in other countries. Through three working sessions with the World Bank and Costa Rica's Social Security Fund, a four-stage evaluation was agreed, in which the PIP was an initial structural piece.
This required transferring knowledge to the technical team from Costa Rica's Social Security Fund about the usefulness and relevance of the PIP as a tool, setting clear expectations about how it would be developed, bringing them in as key actors in the project, and making sure work was performed as a collective technical group. During the working sessions the PIP team used slides such as those presented in Appendix 1. Due to the COVID-19 pandemic, all the working sessions were conducted remotely (through Zoom or Teams). It was very helpful to have short presentations with a very clear agenda, while leaving room at different stages for input and discussion.

The information collected needs to be synthesized to help build a description of what the intervention is in fact doing. The PIP working team needs to carefully review and discuss the available information, which can be organized according to the following categories:

Table 2. Categories to synthesize relevant information for the PIP (17)

Inputs
Definition: Resources needed to achieve the proposed innovation or intervention (things that will be used, such as providers, equipment, and infrastructure).
Example: Health providers, equipment or medical supplies, and physical infrastructure (including issues like electricity, bandwidth, water, etc.).

Beneficiaries
Definition: Groups or subgroups that will benefit from the intervention and whose wellbeing should be improved.
Example: Beneficiaries of a given geographical area or ethnic group targeted by the intervention.

Activities
Definition: Actions undertaken by those implementing the intervention (things that will be done); the activities and operations that actually need to be implemented.
Example: Operating electronic files, waiting lists, screening activities, etc.

Outputs or products
Definition: Short-term goals relating to goods and services directly resulting from the implementation of the activities undertaken by the primary health care innovation or intervention.
Example: Services provided, published reports, strategies to improve quality of care, etc.

Target population
Definition: Specific actors that will be using the outputs or products. Commonly this is the same as the target population of the intervention, but it can also include other users who are intermediaries and not beneficiaries per se, but who need to use the products to generate the overall outcomes and impacts.
Example: Health providers who use products such as a report; community health boards that use products such as a management plan.

Outcomes
Definition:
- Changes in capacities and behaviors, as well as direct benefits of the intervention. Often referred to as medium-term goals.
- Changes in actual practices (behaviors) in the target population as a result of receiving or using the innovation or intervention.
Example: Changes in knowledge, skills, and attitudes of those who have used or received the outputs or products.

Impacts
Definition: Changes in the health of people (or other wellbeing indicators); the long-term goals of the intervention.
Example: Reduction in mortality and morbidity, improvement in cognitive development, etc.

To synthesize the information into these categories, it is highly recommended to use software such as Miro or Mural, which are digital interactive whiteboards that place the synthesized information into colored boxes (similar to post-its or sticky notes) that can be easily moved around and reorganized. Boxes of a given color can be used for each of the aforementioned categories, and if modifications are discussed later, changes are easy to make.
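For teams that also want a plain-text record of the synthesis alongside the whiteboard, a minimal sketch like the following can mirror the sticky notes (Python; illustrative only, and the entries are hypothetical, loosely based on the T2D examples in Box 3 below).

```python
# Hypothetical synthesis of a T2D component, keyed by the
# categories in Table 2 (one list of "sticky notes" per category).
pip_synthesis = {
    "inputs": ["medical electronic file (EDUS)"],
    "beneficiaries": ["people with T2D in the Huetar-Atlantica Region"],
    "activities": ["T2D self-management group sessions"],
    "outputs or products": ["list of people over 20 screened and referred"],
    "target population": ["people with T2D", "health care providers"],
    "outcomes": ["people empowered to self-manage their condition"],
    "impacts": ["reduced T2D morbidity and mortality in the region"],
}

# A quick completeness check before drafting the diagram:
# every category should hold at least one note.
for category, notes in pip_synthesis.items():
    status = f"{len(notes)} note(s)" if notes else "EMPTY - gather more information"
    print(f"{category:20s} {status}")
```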
Digital whiteboard tools like these enable an easy transition toward the actual PIP diagram. To perform this step, the PIP team should work collectively to discuss and reach consensus on what the key information is and how it should be categorized. The information needs to be synthesized to provide as much specificity as possible about the process, while understanding that it will need to be summarized in a diagram.

Box 3. Examples of information synthesis for Costa Rica's PIP pilot network model in the Huetar-Atlántica Region

As mentioned previously, the pilot network model had three components. The examples portrayed here correspond to the component on type 2 diabetes (T2D).
o One key input was the medical electronic file (EDUS).
o One of the fundamental target populations was people in the Huetar-Atlántica Region with T2D.
o Among the many activities, one example was conducting T2D self-management group sessions.
o An example of a product was the list of people older than 20 who had been screened and referred to clinical services.
o In addition to people with T2D, the target population also contained other users such as health care providers.
o In terms of outcomes, one example was empowering people to manage their condition through self-management.
o One of the key impacts was to reduce the morbidity and mortality of T2D in the region.

After synthesizing the information into categories, the next step is to place arrows between them. The arrows establish the expected causal pathways and often carry important assumptions about the functioning of the intervention. To do so, the PIP working team can prepare an initial draft that can then be reviewed through iterations of revision and discussion with relevant stakeholders. There are different ways to start this process, although two are recommended:

• Starting from the impact and moving backward to the activities and inputs; or
• Starting with a very basic representation mapping activities to outcomes and impacts, and then identifying the inputs and processes that must take place between the innovation or intervention activities and the medium- and long-term goals.

Box 4. Example of the development of Costa Rica's PIP pilot network in the Huetar-Atlántica Region

The PIP of the T2D management component of the pilot network in Costa Rica had different outcomes and activities; one is tracked here to exemplify the process. The process initially started by tracking certain self-care management activities and outcomes:

Activities: T2D self-management group sessions; activities to prevent T2D complications.
Behavior changes: Empowering people with T2D; better control of T2D and fewer complications.

Conducting group activities to promote T2D self-management, as well as activities to prevent complications, should lead to better control of T2D and fewer complications while empowering people to manage their chronic condition. This was an initial diagram linking activities and outcomes. The next step was to identify the inputs and processes behind this simple link:

Assumptions: Trained providers on T2D education; guidelines on T2D self-management.
Activities: T2D self-management group sessions; activities to prevent T2D complications.
Products: Trained patients on T2D self-care; early counselling on T2D.
Behavior changes: Empowering people with T2D; better control of T2D and fewer complications.
Impacts: Reduction in T2D mortality and morbidity.

This richer version of the process made it possible to identify inputs, products, and outputs, as well as long-term goals.
The arrows linking the boxes also became more complex. The full version of the PIP can be found at the end of the module.

The arrows of a PIP are particularly important, as they contain information about assumptions or things that need to happen to achieve the given step. For example, based on the case presented in Box 4, the question linking the outcomes and the impact would be: "what outcomes will it take to achieve a reduction in T2D mortality and morbidity?" Based on the partial PIP presented, to achieve this impact we need to empower people with T2D to control and better manage their condition, which will help prevent complications. This becomes very helpful in identifying the roadblocks of the implementation process and the critical control points (CCPs) for understanding the set of indicators that need to be monitored to maximize the impact of a given intervention. (18) Identifying these elements is important for monitoring and quality improvement. For example, in Box 4 a CCP could be an indicator relating to training patients on T2D self-care.

The initial draft of the PIP will need to be refined in collaboration with stakeholders until there is consensus that the model reflects the program accurately. A key question is which stakeholders need to be involved; while there is no single answer, stakeholders involved in interventions often fall into five categories: designers, managers, implementers, funders, and beneficiaries. Two important considerations in this connection are: first, "implementers" include the different types of health providers or personnel who are actually carrying out the intervention; and second, when gathering information through interviews or surveys, informed consent should generally be obtained. In addition, while interviewing designers, managers, implementers, and funders is often exempt from Institutional Review Board (IRB) review, adding beneficiaries commonly requires IRB approval.

There are different ways to secure the collaboration of stakeholders. This guide proposes one based on a remote/online version with three stages. However, this can be adapted to an in-person version with a different number of iterations. Decisions should be taken based on the context and with cultural adaptations.

Collaborative review of the PIP, stage 1: initial review and follow-up interviews

Once the PIP working team has put together an initial draft of the diagram, it will be sent for review to selected stakeholders, together with a video briefly explaining what a PIP is, the importance of their feedback, and specific instructions on how to provide feedback. At least one week should be allowed for feedback, which is sent via email to the PIP working team. Templates such as those suggested in Appendix 2 can be used for gathering the feedback from stakeholders. If needed, a brief meeting can be set up for follow-up and clarification questions. After getting the feedback from the different stakeholders, the PIP working team will revise the diagram and identify any aspects that still require further inquiry. For these aspects they will conduct interviews (via Zoom or a similar virtual platform, or in person) with key informants (who can be from the same pool as the initial review or additional stakeholders). Based on the additional information provided, the PIP working team will finalize a new version of the diagram.
Collaborative review of the PIP, stage 2: review workshop to build consensus

This part of the process is fundamental because it is targeted at building consensus as to whether the diagram is an adequate portrayal of the intervention or, if required, what needs to be modified. This is achieved through a virtual workshop lasting two to three hours. The workshop will be facilitated by the PIP working team and will bring together about 10 stakeholders selected from those who previously participated in stage 1. The workshop will be organized as follows:

• The new version of the PIP diagram will be sent to participating stakeholders for review a few days before the workshop.
• During the workshop the PIP will be reviewed by category (that is, inputs, activities, etc.). To do so, the PIP working team will conceptually present the definition of each category and then ask stakeholders to discuss whether that category is correctly portrayed in the diagram (see Box 5 for an example).
• For the discussion, the participants will be placed in breakout rooms (an embedded function of Zoom) and given a short time (approximately 10 minutes) to discuss whether a given category of the PIP (for example, activities) correctly summarizes how the intervention is working (or is intended to work, depending on the stage of the project cycle in which the PIP is being developed).
• After this short time, all the breakout rooms will return to the general session and a poll will be held (using the embedded function in Zoom) asking stakeholders whether they agree or disagree with how the category is synthesized in the PIP diagram. Each stakeholder will answer the poll individually. The results will be presented; if full agreement is reached, the session moves to the following category and repeats the same process. If the results show any level of disagreement, the stakeholders will be asked, using the chat function, to explain why and suggest modifications. The PIP team will need to make sure all the information in the chat is saved.
• The workshop should end with a debriefing activity asking participants to share any thoughts about the methodology of the workshop.
• If the stakeholders agree, the workshop can be recorded. Nonetheless, it is important to stress that the small-group discussions and poll results cannot be recorded; therefore, separate notes should be kept.

Appendix 3 provides some generic slides that can help organize the workshop. This methodology can easily be adapted to a face-to-face version, but more time should be allocated. Based on the outcomes of the workshop, the PIP working team will make the modifications suggested and generate a new version of the diagram.

Collaborative review of the PIP, stage 3: generation and revision of the PIP report

The next step is for the PIP working team to generate a report on the PIP, the aim of which is to lay out the implementation processes of the intervention. The report should be brief but include the following sections:

• A brief introduction to the health system where the intervention is taking place, and an explanation of what the intervention intends to achieve.
• A short description of what a PIP is and why it is important.
• A methodological note explaining how the PIP was developed.
• Presentation of the PIP diagram with a brief narrative.
• Acknowledgment of the contribution of the key stakeholders involved.

Appendix 4 offers some aids for developing the PIP report.
Once this report is completed, it will need to be sent for final validation to key stakeholders, who can be a subset of those who participated in stages 1 and 2. The report should be sent electronically (that is, in Word format), using the comments or track-changes function for feedback within a pre-specified time period (7–10 days). After receiving such feedback, the PIP team will generate a final version, unless there were many changes that might warrant an additional iteration of stage 3. The final version can either be presented at a final meeting or be distributed among key stakeholders.

Box 5. Participants in the iterative and participatory process of drafting the PIPs of Costa Rica's health network service delivery demonstration project in the Huetar-Atlántica Region

The PIPs of Costa Rica's health network delivery demonstration project were drafted when the project was already being implemented. Costa Rica's Social Security Fund was a key actor through the Provision of Health Services Strengthening Program Area, which has a technical team that had contributed to the demonstration project's design and monitoring. Another fundamental group was the regional stakeholders in charge of overseeing and implementing the demonstration project. Several actors from both clusters of stakeholders participated in the whole iterative process highlighted previously in the toolbox. Together, they brought key information about the necessary assumptions, inputs, outputs, etc. depicted in the PIPs, from both a central and a regional perspective. They were highly involved and knowledgeable about the intended design and contextual adaptations, which proved invaluable for drawing the pathways. A rather stable group of actors participated throughout the different iterations and the revision and consensus-building process. No beneficiaries or community organizations were part of the process. This responded to three different issues:

o There was limited time to draft the PIPs, as they were an essential input for further evaluative stages.
o The PIP workshops and interactions were fully virtual, since they took place during the COVID-19 pandemic. This limited the number of participants who were feasibly able to participate.
o While there is always room for engagement with beneficiaries and community organizations, the PIPs focused on institutional reforms targeted at integrating services at different levels of the health system; hence, actors such as regional managers and implementers and central-level actors from Costa Rica's Social Security Fund were the key stakeholders.

17. Mayne J. Useful theory of change models. Canadian Journal of Program Evaluation. 2015;30(2).
18. Buccini G, Harding KL, Hromi-Fiedler A, Pérez-Escamilla R. How does "Becoming Breastfeeding Friendly" work? A programme impact pathways analysis. Maternal & Child Nutrition. 2019;15(3):e12766.

Template for collecting feedback in the first round of PIP review (Appendix 2)

The purpose of this form is to gather feedback on the first draft of the PIP. When providing feedback, please keep in mind that a PIP helps to describe the steps through which the primary health care innovation or intervention is expected to achieve its impact. The PIP identifies the processes that must be in place for the intervention to achieve its main or long-term outcome.
This implies showing the linkages between the sequence of steps in getting from activities to impacts and describing the causal assumptions behind such links. Based on the initial diagram of the PIP, please provide feedback on the following aspects:

1. Are there any missing boxes for any of the following aspects? If so, please specify which box is missing and how it would connect to other elements of the PIP.
2. Does any box have errors (that is, names, responsibilities, wording, placement, or connections)?

For each of the following categories, indicate any missing boxes (please specify) and any errors in boxes (please specify):
- Inputs
- Population
- Activities
- Outputs or products
- Goal population (users)
- Types of services provided
- Outcomes
- Impacts

3. Are there any missing arrows connecting boxes? If so, clearly specify which boxes the missing arrow would connect.
4. Are there any errors in the specified arrows (that is, arrows that link boxes which should not be connected)? Please specify the boxes connected by the mistaken arrow.

For questions 3 and 4, indicate any missing arrows (please specify the boxes that should be connected) and any errors in arrows (please specify the boxes that should not be connected).

5. Provide any further feedback that could be helpful in improving the PIP (if needed, you can include the diagram with comments).

Aids for preparing the PIP report (Appendix 4)

1. Briefly summarize where and why the primary health care innovation or intervention is being implemented. Also explain who is implementing the innovation or intervention.
2. Explain why a PIP was developed (objectives of using this methodology for the specific innovation or intervention).
3. Briefly explain what a PIP is as an evaluation tool.
4. Highlight that the World Bank toolbox methodology was used to elaborate the PIP, including the elements mapped and the steps. When discussing the steps, make sure to highlight how data was collected, who participated in the workshop, etc.
5. Present the PIP diagram and briefly narrate each of its parts, highlighting the main pathways.
6. Highlight why applying this toolbox is helpful for the innovation or intervention.
   a. Underline the main learning points (this will depend on why the PIP was developed).

Available materials linked to Module 3:
• Introductory document
• Video tutorial
• Quiz for self-assessment (definition and usefulness of PIPs)

While developing a PIP and reaching consensus about its depiction with key stakeholders can be a goal in itself, PIPs are also a fundamental step toward an implementation fidelity assessment. The purpose of a fidelity assessment is to ascertain the degree to which an intervention is delivered as intended. (19) In this sense, fidelity can be defined as the extent to which the delivery of the intervention adheres to the PIP. When thinking about the implementation fidelity of an intervention, two assessment areas can be considered (20):

1. The extent to which the intervention-as-delivered matches the intervention-as-planned. In this type of assessment, we focus on the implementation of key parts of the PIP, such as whether health care providers were recruited and/or trained as planned in the intervention. Hence, this assessment is a comparative approach targeting differences or variations between planned and actually implemented activities.
This type of assessment is highly relevant, as it aids in understanding the implementation (and its variations), thus providing information about the feasibility of scale-up and sustainability, and in determining whether the impacts (or lack thereof) of the innovation or intervention are due to the design itself or to poor implementation. (21, 22) While this assessment is performed once an intervention is (or has been) implemented, planning is fundamental for purposes of data collection.

2. The extent to which the intervention-as-delivered is consistent with the implementation of the key, essential, or "active ingredients" that are needed for an innovation or intervention to be effective. (23) This becomes vital for adapting and scaling up interventions and might be a mechanism for cost reduction, due to a better use of resources in different parts of the PIP.

There is commonly a tension between fidelity (that is, keeping things as proposed in the design or theoretical notion of the innovation) and adaptation of the innovation when it is implemented in real-world settings. This tension grows more salient when interventions are complex, large-scale, and dynamic, as in the case of a primary health care setting. This is an important challenge that evaluators and implementers will need to constantly address, and this approach seeks to provide insights. (24) As in the prior phase, it is common to perform this type of assessment as the interventions are being implemented, since it can help target areas of implementation that need to be improved. Sometimes this may actually lead to modifying the PIP itself.

Conducting a fidelity assessment is important as it sheds light for managers, implementers, and funders, among other stakeholders, on why an intervention may (or may not) achieve its intended impacts. In this respect, it turns the PIP into a functional input to address questions about how and why an intervention works (or not). In addition, an implementation fidelity assessment also helps provide information about the feasibility of scale-up and sustainability and the adaptations required, without compromising the intervention's essential elements. When considering the World Bank's project cycle (Figure 1), an implementation fidelity assessment can be particularly useful for stages 5 and 6 (marked with a white star).

Figure 1. World Bank Project Cycle

An example of an implementation fidelity assessment in innovative primary health care practices is provided by the fidelity assessment conducted in Costa Rica for its service delivery network demonstration project in the Huetar-Atlántica Region. Through a collaborative methodology, a World Bank team, together with stakeholders and technical teams from Costa Rica's Social Security Fund, assessed implementation gaps by contrasting the previously elaborated PIPs with actual indicators of implementation and identified essential elements for adaptation, sustainability, and scale-up. This assessment has provided important information for scale-up in other regions of the country and has identified essential indicators for monitoring implementation quality.

19. Fidelity M. Attention to fidelity: why is it important. The Journal of School Nursing. 2012;28(6):407-8.
20. Haynes A, Brennan S, Redman S, et al. Figuring out fidelity: a worked example of the methods used to identify, critique and revise the essential elements of a contextualized intervention in health policy agencies. Implementation Science. 2016;11:23.
21. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Science. 2007;2:40.
22. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research. 2003;18(2):237-56.
23. Saunders RP, Evans MH, Joshi P. Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promotion Practice. 2005;6(2).
24. Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656).

Available materials linked to Module 4:
• Guidance on the steps
• Video tutorial
• Working templates
o Appendix 1: PPT template for the meeting to present the implementation fidelity methodology and the virtual bulletin board.
o Appendix 2: Example of a generic virtual survey to gather feedback from key stakeholders on the proposed adherence indicators.
o Appendix 3: PPT template for the workshop to present the findings from implementation fidelity indicators and moderators.
o Appendix 4: PPT template for the meeting with stakeholders to define the core elements.
• Quiz for self-assessment (step by step)

A fidelity assessment helps stakeholders, including managers, implementers, and funders, understand whether an intervention has been delivered as planned. In this respect, it turns the PIP into a functional input to address questions about how and why an intervention works (or not). This helps to contextualize the potential program outcomes (or lack thereof) and provides information about the feasibility of scale-up and sustainability. The aim of the present document is to provide guidance for conducting a fidelity assessment of already-implemented interventions. It can also be helpful for interventions just about to be launched, as it might inform stakeholders about the information and indicators that should be collected to monitor implementation.

In the last fifteen years the concept of implementation fidelity has been operationalized and generally defined through five dimensions that need to be measured (25, 26), which are summarized in Table 1.

Table 1. Implementation Fidelity Dimensions (based on Pérez et al., 2016) (27)

Dimension | What it measures
Adherence | Whether the intervention is implemented as originally described
Dose | Frequency and duration of exposure to the intervention
Quality of delivery | The way or manner in which the intervention is delivered
Participant responsiveness | Degree to which participants are engaged with the intervention
Program differentiation | Critical features that distinguish the intervention

While there are different views on how to conduct a fidelity assessment and rich discussions exist around each of these dimensions, the present guide is based on the implementation fidelity framework of Carroll et al., (28) which is summarized in Figure 1. This framework was selected because:
• It clarifies and explains the function of each of the five fidelity dimensions and their relationship to one another.
• It includes additional moderating elements suggested by the diffusion of innovations literature (that is, intervention complexity and facilitation strategies), which are important in optimizing the level of fidelity achieved. (27)
• It has been acknowledged as one of the most complete conceptual frameworks for implementation fidelity. (29)
Figure 1 depicts the fundamental elements of implementation fidelity, their relationships, and how they relate to broader program evaluation efforts. In this framework (28), adherence includes the subcategories of content, frequency, duration, and coverage, through which we can assess whether an intervention's active ingredients have been received by the targeted participants as often and for as long as was planned. This relates to the adherence and dose dimensions of Table 1. In addition, Figure 1 also relates these subcategories to the framework's moderating factors: intervention complexity, facilitation strategies, quality of delivery, and participant responsiveness (which are related to the last three dimensions of Table 1).

Figure 1. Carroll's Conceptual Framework for Implementation Fidelity

Conducting an implementation fidelity assessment requires a participatory approach that needs to be coordinated by a team. The same aspects that were highlighted regarding the team composition for the PIP apply here: knowledge, no conflict of interest, credibility, and responsiveness to managers, policymakers, and/or funders. In addition, communicating the outcomes of an implementation fidelity assessment can be delicate, as it may reveal deficiencies in the implementation or logic of a program or innovation, which can be politically loaded. Hence, the team will need to have the ability to communicate effectively and, ideally, engage relevant stakeholders at key stages of the implementation process. In addition, rules about the confidentiality of the findings and how they will be disseminated may be an important aspect when setting up the fidelity assessment team, as this might help build trust among the stakeholders involved. While the size of the fidelity assessment team will depend on each member's availability and experience, the complexity of the intervention assessed, and the timeframe to conduct it, the team should comprise three to six members with a wide variety of expertise and seniority. It is important to frame this exercise as a win-win situation in which policymakers and managers are learning about implementation gaps that can be addressed in future iterations of the implementation for better results, which can be politically important. However, to correctly identify such gaps (or successful strategies), trust in the leading team is fundamental.

Box 1: Who led the implementation fidelity assessment of Costa Rica's pilot network model of health services in the Huetar-Atlántica Region?

Given the trust and positive rapport that had been established in the process of mapping the PIPs, the same group of external World Bank consultants led the fidelity and sustainability assessment. There were significant economies of scale, since the consultants already knew the program, the correct terminology, the actors involved, and the policymakers and managers very well. This shortened the timeline for conducting the assessments. It is important to stress that the PIP and the fidelity/sustainability assessment can be conducted by the same team or by independent teams. This will depend on the specific context.

The first task of the implementation fidelity assessment team will be to map adherence indicators and moderating elements considering the PIP.

Dimensions of adherence

Based on Carroll et al. (28), adherence is defined as the reception of the "active ingredients" (content) of the intervention by participants (coverage) with the appropriate and planned dosage (frequency and duration).
This means that we need to make sure to include content, coverage, frequency, and duration indicators.

• Content delivery refers to how much of the intervention was in fact implemented as originally designed. An example from the health needs assessment component of the demonstration project in Costa Rica is the proportion of health areas with an endorsed report. This is a key content measure because the endorsement of such reports triggers other fundamental activities.
• Coverage is defined as the number of people intended to receive the intervention (or a specific activity of the intervention) compared to the people who actually received it. For example, when assessing the PIP of remote specialized care in Costa Rica's demonstration project, a coverage indicator was the percentage of personnel within the network trained to provide remote care.
• Frequency looks at how often content elements are delivered to participants and whether this frequency responds to what was envisioned. An example from Costa Rica's demonstration project is the following indicator: temporal distribution of community activities for the promotion and primary prevention of T2D, which assesses not only that the activities are implemented but also their frequency.
• Duration reflects how long content elements were delivered to participants and whether this was the expected duration. For example, in the integration of outpatient surgery within the network service delivery in Costa Rica's demonstration project, a key element is to constantly update the availability of resources; as such, a duration measure was the periodicity with which the list of available surgeons in the region is updated. Not having an updated list would hamper the region's ability to deliver outpatient surgeries.

Moderating factors

Carroll et al. (28) suggest looking at four interrelated moderators of fidelity, which focus on the complexity of the intervention, facilitation strategies, the quality of delivery, and participant responsiveness. Complex interventions have been described as having a greater potential for variation in their delivery, so complexity might be an important moderator in bringing challenges to implementation. There can also be enabling or facilitation strategies, defined as factors that optimize implementation fidelity. Facilitation strategies are often specific actions that support implementation in an initially unplanned way. Quality is a key moderating determinant of program implementation fidelity: despite efforts to implement an intervention or innovation, if it is not performed with adequate quality standards it is unlikely to adequately lead to subsequent loops of action or delivery. Finally, understanding key participants' level of engagement is fundamental to understanding the intervention's reach, and engagement often depends on the intervention's relevance and acceptability as perceived by participants or beneficiaries.

Essential steps in the implementation process

While the PIPs include many elements that represent the program architecture, not all of these elements will be translated into an adherence indicator.
From an implementation fidelity perspective, most adherence indicators will emerge from looking at three transitional sections of the PIP:
• From resources to activities
• From activities to products/outputs
• From products/outputs to some expected changes

While looking at such transitions, a key question is whether the step implies a fundamental process or assumption for the intervention or innovation to work. It is not uncommon to start with a longer list of indicators and then reduce it based on perceived relevance.

Context of the implementation

Accounting for the implementation context is extremely important for identifying moderators. Political factors, crises, changes in the budget, geography, and safety are some of the factors that can shape the context in which the innovation or intervention is being implemented. Implementation fidelity analyses require identifying such factors.

In summary, the implementation fidelity assessment team will need to identify adherence indicators linked to content, coverage, frequency, and duration that capture key elements of the implementation process. In addition, the team will need to develop a strategy to identify implementation fidelity moderators, and the approach to identifying the moderators will be shaped by the specifics of the intervention's context.

Box 2: How did the implementation fidelity assessment team of Costa Rica's pilot network model define adherence indicators and design an approach to document moderators?

The external World Bank consultants mapped content, coverage, frequency, and duration indicators based on the PIPs previously developed. One of the consultants independently defined indicators based on an extensive review of the assumptions and functional mechanisms of the PIP. These indicators were placed on a matrix identifying them by type (see Table B2.1 summarizing the initial indicators linked to the needs assessment component of the pilot intervention). Based on this table, the full group of external consultants reviewed, redefined, or reworded some of the indicators; for example, for the first content indicator, one of the aspects that was discussed and modified was the geographical reference of the local teams.

Table B2.1 Selected examples of the initial implementation fidelity indicators for the needs assessment component of Costa Rica's pilot network model

Content | # defined local teams (regional/local); evidence of local methodological adaptations (qualitative); # tabulated epidemiological indicators; # focus groups; proportion of health units with endorsed reports
Coverage | % health providers familiar with the endorsed needs assessment; % network beneficiaries exposed to the dissemination of the network's needs
Duration | Evidence of local teams' sustainability (qualitative); validity of the endorsed reports (qualitative)
Frequency | (none in this subsample)

Note: This is a subsample of the indicators that were actually defined; a full description can be found in this report.

While the implementation fidelity team is in charge of defining the adherence indicators (based on the PIP) and designing an approach to document moderators, it is very important to generate consensus among key stakeholders about whether the proposed adherence indicators accurately capture the key parts of a successful implementation. The team can omit indicators or define them incorrectly. Feedback is therefore fundamental. This is equally important in identifying and analyzing questions about moderating elements.
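For teams that want to keep such a matrix in machine-readable form while it circulates for review, the following minimal Python sketch shows one possible structure; the field names and indicator entries are illustrative stand-ins drawn from the examples above, not a format prescribed by the toolbox.

```python
# Illustrative sketch of an adherence-indicator matrix organized by the
# four adherence dimensions. All entries are hypothetical examples.
adherence_matrix = {
    "content": [
        "Number of defined local teams (regional/local)",
        "Proportion of health units with endorsed reports",
    ],
    "coverage": [
        "% of health providers familiar with the endorsed needs assessment",
    ],
    "frequency": [
        "Temporal distribution of community activities for T2D prevention",
    ],
    "duration": [
        "Evidence of local teams' sustainability (qualitative)",
    ],
}

# A simple completeness check before sharing the matrix for feedback:
# every adherence dimension should have at least one proposed indicator.
for dimension, indicators in adherence_matrix.items():
    status = "OK" if indicators else "missing: propose indicators"
    print(f"{dimension}: {len(indicators)} indicator(s) [{status}]")
```

A structure like this also makes it straightforward to export the indicator list into the survey or bulletin board used in the feedback stage described next.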
Securing such feedback needs to be a transparent and efficient process. Accordingly, it is highly recommended to use a virtual bulletin board (such as Padlet) where the team can post relevant documents and the indicators themselves and give access to key stakeholders for feedback purposes. Such virtual bulletin boards allow participants to post comments and reply, as well as to add links with polls or surveys to specifically rank or validate indicators. Hence, the implementation fidelity assessment team will be in charge of developing and overseeing the virtual bulletin board while the assessment is taking place. Clear instructions about free access and navigation will also be fundamental activities for the team. Box 3 exemplifies the virtual bulletin board used in Costa Rica's implementation fidelity analysis.

Box 3: Example of the virtual bulletin board designed by the assessment team to secure feedback on adherence indicators and moderators relating to Costa Rica's pilot network model

For the implementation fidelity analysis in Costa Rica, the external consultants designed a Padlet board. The board's first interface allowed stakeholders to (i) view an introductory tutorial video summarizing the objectives of the bulletin board, how to navigate it, and the expected tasks to be performed by stakeholders; (ii) view key announcements; (iii) access the PIPs of each component; (iv) access the proposed indicators of each component and the assessment and feedback link; and (v) access key reference documents. It is important to underscore that only stakeholders participating in the analysis were able to access the board.

For each intervention component, clicking on tab 4 would open the content, coverage, frequency, and duration indicators. For each of them, stakeholders could leave messages that would be seen by any of the other participants. Participants could then respond in a transparent and open way. This tab was also used to pose clarifying questions to the consultants. In addition, on the left-hand side of the screen there was a tab with a link to a REDCap questionnaire in which participants could rate each indicator according to its relevance and measurement feasibility, propose alternative wording, report the actual measure of the indicator, or recommend sources of information to assess it.

Once the implementation fidelity team has conducted an initial identification of the adherence indicators and has designed the virtual bulletin board (posting relevant documents, the indicators themselves, and the survey to assess the indicators), it will be necessary to schedule a meeting with the stakeholders who will be providing feedback to present the fidelity methodology and the participatory assessment approach. During this meeting there are four key aspects that should be addressed:

a) Recap the logic of performing an implementation fidelity analysis and its relevance from a managerial and policymaking standpoint.
b) Establish the importance of conducting the implementation fidelity analysis through a participatory and iterative process. In this meeting it is fundamental to establish an open and trustful dialogue, making sure that the different stakeholders feel engaged and confident.
c) Present the virtual bulletin board, including the documents uploaded, the discussion section for each adherence indicator, and the link to the survey to assess each of the indicators.
Make sure to explain that the logic is that each stakeholder will review the indicators and provide any feedback on the discussion board if needed (that is, when requiring opinions or inputs from other stakeholders or the implementation fidelity team), and demonstrate the use of the adherence indicator assessment survey.
d) Establish a clear timeline for when the survey needs to be submitted, as well as the estimated average time that the whole review and feedback process will take.

For this session, the implementation analysis team will need to (a) have a visual presentation (see Appendix 1 for some suggestions), (b) provide ample time for questions, and (c) be ready to manage unfavorable reactions or resistance to the proposed process, which might arise from fear of political reactions, workload, or lack of capacity. Considering the specific context of each intervention, this will need to be anticipated and discussed by the implementation fidelity team.

Once the stakeholders start using the electronic bulletin board, the implementation fidelity team will need to moderate the comments and respond to any specific questions that arise in the chats. The logic of these chats on the bulletin board is to motivate reflection and the exchange of ideas in a transparent and traceable way. In addition, the implementation fidelity team will need to design and upload the final instrument through which the stakeholders will assess and submit individual comments on each of the fidelity adherence indicators. As previously mentioned, the indicators arise from specific parts of the PIP and should be linked to the content, coverage, frequency, and duration of the primary health care innovation or intervention. Based on the initial implementation adherence indicators, an online survey (using SurveyMonkey, Qualtrics, Google Forms, or REDCap) will be designed. The goal will be to secure stakeholder feedback regarding:

a) Relevance of the indicator in capturing essential parts of the innovation or intervention.
b) Wording of the indicator (while this may seem tedious, specific language tailored to the concepts and terms used locally is extremely important).
c) Feasibility of measuring the indicator.
d) Data availability (including an open-ended question where stakeholders can state specific surveys, documents, or people who should be interviewed).
e) Identification and wording of any potentially omitted indicator.

This survey will need to be user friendly to facilitate the timely participation of stakeholders. The link to the survey will be posted on the virtual bulletin board. Appendix 2 offers a generic example that might be helpful when designing this questionnaire. Once the online survey is filled out by stakeholders, the implementation fidelity team will need to analyze the feedback. The typical findings will lead to a combination of results that are summarized in Table 2.
Table 2. Possible stakeholder feedback on adherence indicators and suggested actions

Relevance of the indicator | Wording of the indicator | Feasibility of measuring the indicator | Suggested action
✓ | ✓ | ✓ | Keep the indicator as initially designed
✓ | ✗ | ✓ | Reword the indicator according to recommendations
✓ | ✓ | ✗ | Assess whether it should be kept to show the gap in information
✓ | ✗ | ✗ | Assess whether it should be kept to show the gap in information and reword it
✗ | ✓ | ✓ | Reassess whether the indicator provides any useful information
✗ | ✗ | ✓ | Reassess whether the indicator provides any useful information and reword it
✗ | ✗ | ✗ | Remove indicator
(✓ = rated adequate by stakeholders; ✗ = rated problematic)

While some comments might be convergent and easy to resolve, others will be more difficult and may show conflicting feedback. This may require recontacting some of the stakeholders to gain further understanding before deciding whether or not to keep the indicator. Some stakeholders will want to remove or modify important indicators because these might show unfavorable results. The implementation fidelity team's role is to negotiate, always highlighting the relevance, from a policy and political standpoint, of knowing where the implementation gaps are and how they can be addressed in further iterations or scale-up of the intervention.

After assessing the initially proposed adherence indicators, the implementation fidelity team will need to assess any additional stakeholder-proposed indicators. Some of the aspects that need to be considered are whether the indicator is aligned with the PIP, whether it is redundant with any of the other indicators, and what information it will bring to the fidelity assessment. Keep in mind that stakeholders might at times propose indicators that they know will lead to good results or that are solely related to some of the activities they are in charge of, without necessarily considering their relevance to the fidelity assessment as a whole. It is important to keep in mind that the final product of this analysis is the set of adherence indicators that will be used for the assessment of implementation fidelity.

Box 4: Example of some adherence indicators selected for Costa Rica's pilot network model

The definition of adherence indicators for Costa Rica's pilot model was complex, as it involved six different PIPs entailing multiple functional processes that needed to be addressed. Some examples of the initial and final indicators linked to the component focused on ambulatory clinical services (hospital de día) are presented in Table B4.1. The goal is to illustrate how the participatory definition of indicators through the electronic Padlet board led to indicators with a high degree of consensus regarding relevance and feasibility among stakeholders. These indicators were ultimately used to actually perform the implementation fidelity analysis.
Table B4.1 Selected Adherence Indicators Before and After the Participatory Review Linked to the Outpatient Clinical Service Component of Costa Rica's Network Pilot

Original indicators | Final indicators
Average hospitals with an outpatient clinical services coordinator in relation to the total number of hospitals in the region | The indicator was deemed relevant and feasible, and no changes were suggested
Periodicity with which the lists of medical coordinators is reviewed | This indicator was considered irrelevant and/or unfeasible
Percentage of referred patients from other health areas compared to those served at the center of origin | Percentage of patients referred to outpatient clinical services from any health area
Average waitlist time before a patient is referred to outpatient clinical services | This indicator was considered irrelevant and/or unfeasible by some, but the leading team considered it relevant to keep

Note: This is only a subset of the actual adherence indicators. The table illustrates some potential outcomes triggered by the participatory review of the indicators:
• in green, an indicator that was kept and suffered no change.
• in yellow, an indicator for which the wording and specificity were rephrased.
• in gray, an indicator for which there was consensus that it was irrelevant and/or unfeasible.
• in white, an indicator that some considered inadequate or unfeasible but for which there was no consensus. In such cases the leading team decided, and in this particular case the indicator was kept as originally proposed.

Once the list of adherence indicators is ready, the implementation fidelity team will need to identify the sources of information that were recommended for each of the indicators by the stakeholders in the online survey. As previously explained, such sources commonly entail administrative data, documents, and key informant interviews. The support of the stakeholders will be fundamental to access administrative data and documents and to gain access to key informants. For key informant interviews, several aspects should be considered:

(i) Before contacting key informants, the fidelity analysis team will need to draft:
• A brief email or letter explaining the purpose of the interview, highlighting that participation is entirely voluntary and that the information discussed will be kept as confidential as possible. In addition, the expected duration of the interview should be provided, along with an explanation as to whether it would be performed in person, online, or by telephone.
• An interview guide designed based on the fidelity indicators for which information is being sought. This guide should not be shared with interviewees, but it is useful for estimating the expected duration of the interview.
(ii) Send the invitation to key informants. Sometimes it is better if the invitation comes from one of the trusted stakeholders. Make sure to provide some day and time alternatives for conducting the interview if key informants agree to participate.
(iii) Key informants are usually government or NGO officials at the local or national level. They may also be health care workers with some level of managerial tasks or leadership. Interviewing such actors is usually exempt from IRB review, but if other types of actors are involved, more formal IRB reviews and consent forms will be needed.
(iv) A key decision of the fidelity analysis team will be to identify who will be conducting the interviews.
Ideally this person should have some level of experience or training in this endeavor. If none of the team members has experience or training, it might be a good idea to hire a professional interviewer or to get some basic training, and this might also provide a good opportunity to review the interview guides.
(v) While it may be ideal to record the interviews, some government officials do not feel comfortable being recorded and tend to be more open if not recorded. If this is the case, notes will usually be sufficient. Make sure to train whoever is interviewing in adequate and detailed notetaking. If recordings are available, transcriptions will be needed for data analysis.
(vi) Because key informant interviews are time consuming, the fidelity analysis team will need to be strategic regarding the number of interviews, always giving priority to those that inform several indicators as well as those linked to indicators for which no other data is available. Also keep in mind that interviewed stakeholders may sometimes refer you to other key informants or provide further documents or data sources for review.

The implementation fidelity team will need to thoroughly review how the gathered data (both qualitative and quantitative) informs each of the predetermined indicators. In doing so, the team will need to critically assess several aspects:

(i) Credibility of the source. For example, if a key informant provides a percentage or amount, it would be important to inquire whether it is traceable to a report or data set or whether the informant explained the source backing such a statement.
(ii) Consistency between data sources. Indicators can often be assessed with information provided by more than one source, for example, a key informant interview and a report, or two different informants. When consistency is found, this provides more robust evidence. However, it is not uncommon to find inconsistencies. Whenever this happens, the team will need to assess their potential sources; for example, the key informant might be referring to an updated report. In other situations, such inconsistency might imply that the implementation process is being perceived in different ways or through different lenses, which would be an outcome of the analysis in itself.
(iii) Magnitude of the change entailed in the indicator. While some indicators can assess functional aspects, such as publishing a guideline, it is common to find indicators that measure a process or output. Examples of the latter include the percentage of health care professionals who attended a course targeted at understanding the new guidelines or the change in adherence to the clinical guidelines during care. For indicators that entail such measurements, magnitude is relevant from a fidelity standpoint. Following the foregoing examples, a guideline may have been published and courses for providers offered, but it may be the case that only 5 percent of the overall health care force has taken the course. In this case, while the course is being offered, its coverage is very poor in magnitude.

Taking these considerations into account, a spreadsheet should be generated in which indicators are placed in rows, while columns assess the availability of information; if information is available, its credibility, consistency, and magnitude; and, lastly, a score based on such information.
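As an illustration only, the following Python sketch shows one way the logic of such a spreadsheet could be encoded. The indicator rows, ratings, and the simple scoring rule are hypothetical stand-ins; the actual criteria must be predetermined by the team, as discussed below.

```python
# Illustrative sketch: scoring adherence indicators from the assessment
# spreadsheet (rows = indicators; columns = availability, credibility,
# consistency, magnitude, score). All rows and ratings are hypothetical.
RATINGS = {"low": 0, "moderate": 1, "high": 2}

def fidelity_score(credibility: str, consistency: str, magnitude: str) -> str:
    """Map the three criteria to a high/moderate/low fidelity level.

    Hypothetical rule of thumb: all three criteria high -> high fidelity;
    all three low -> low fidelity; anything in between -> moderate.
    """
    values = [RATINGS[credibility], RATINGS[consistency], RATINGS[magnitude]]
    if min(values) == 2:
        return "high"
    if max(values) == 0:
        return "low"
    return "moderate"

# Sources are tracked per indicator so every cell stays traceable when
# the findings are discussed with stakeholders.
rows = [
    {"indicator": "Proportion of health areas with an endorsed report",
     "available": True, "credibility": "high", "consistency": "high",
     "magnitude": "high", "sources": ["endorsed regional report", "interview 3"]},
    {"indicator": "% of network personnel trained to provide remote care",
     "available": True, "credibility": "moderate", "consistency": "low",
     "magnitude": "low", "sources": ["interview 1"]},
]

for row in rows:
    if not row["available"]:
        print(f"{row['indicator']}: no information available")
        continue
    level = fidelity_score(row["credibility"], row["consistency"], row["magnitude"])
    print(f"{row['indicator']}: {level} fidelity "
          f"(sources: {', '.join(row['sources'])})")
```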
Since the results of the fidelity analysis will need to be presented to stakeholders, it is always very important to be able to track all the sources of information that feed each cell, as discussions about the sources behind a finding frequently arise. Since quantitative and qualitative data inform implementation fidelity, indicators can in general be assessed through scales, such as a high, moderate, or low degree of fidelity. These scales need to be related to the credibility, consistency, and magnitude criteria. While each analysis might have its particularities, a rule of thumb is that an indicator that is credible and consistent and has adequate magnitude should yield a high fidelity assessment, while one with low credibility, low consistency, and low magnitude should lead to a low implementation fidelity score. The implementation fidelity team will need to predetermine reasonable criteria to score indicators based on the findings.

Lastly, it is very important to be sensitive about the best way to present findings to stakeholders. The final goal of the implementation fidelity analysis is to provide actionable information to understand the implementation process and to generate information about its sustainability and scale-up. This may imply using visual ways of portraying data that are more "user friendly" (see Box 5).

Box 5: Example of the data analysis and findings summary for Costa Rica's pilot network model

In Costa Rica the adherence indicators were assessed and summarized based on a visual scale in which the darker the color, the better the implementation fidelity. This was decided as a mechanism to portray the results in a clear but politically adequate way. Behind the colors, there is an account of the credibility, consistency, and magnitude of the evidence highlighted for each indicator. In addition, because data was not available or was only partially available to assess several indicators, data availability was also assessed, since it equally informs stakeholders about future data needs. In addition, because the pandemic greatly affected the implementation of certain processes, some indicators were marked with a "P" to identify a challenging implementation context. Table B5.1 summarizes how this data was presented for the needs assessment component.

Table B5.1 Sample of Indicators Summarizing the Implementation Fidelity Findings for the Needs Assessment Component of Costa Rica's Network Pilot

Indicator | Available evidence | Implementation fidelity level
Existence of identified teams with trained members | |
Type and number of needs assessments recorded | |
Evidence regarding the participation of Community Health Boards in the need identification | - |
Members of support team and clinical management teams are aware of the needs assessment | - |
Proportion of the health areas in the region with endorsed reports | |
Community dissemination activities | - | P

Note: These are only a sample of the full indicators that summarized the findings for the needs assessment component (fidelity levels were shown as color shading in the original table); for a full description you can review this report.

As has previously been mentioned, a fundamental goal of the implementation fidelity analysis is to inform managers, funders, policymakers, and other stakeholders about the implementation process. Hence, once data has been analyzed, it is necessary to discuss such findings with the relevant stakeholders before disseminating them.
Such discussion is geared toward increasing shared understanding and provides an opportunity to amend any indicators that were incorrectly scored and/or to bring additional data that help document and measure the indicators. From a participatory perspective, the implementation fidelity team should organize a workshop, in person or online, with key stakeholders to present and discuss the findings. In general, the stakeholders who participated in prior steps of the analysis should also be invited to the workshop. It is suggested that the workshop include:

• A brief reminder of the purpose of the implementation fidelity analysis
• A recap of how indicators were defined
• A synthesis of the data sources and results for each indicator

It will be very important to plan time for discussion and feedback after each indicator. In addition, the implementation fidelity team should make sure to have a good facilitator and someone taking notes. Appendix 3 provides some generic PPT templates to prepare the materials for the workshop. While the ideal process requires only one workshop to present the findings and build consensus around them, if the first workshop generates a considerable lack of consensus and/or a substantial amount of new information is provided or requested, it may be necessary to go back to the data analysis and conduct a second workshop.

Steps 3, 4, 5, 8, and 9 require stakeholder involvement. Keep in mind that stakeholders are individuals who influence or are influenced by the implementation of an intervention or who perhaps have specific contextual knowledge. While no formal mechanisms to identify stakeholders have been presented in this toolbox, specific methodologies exist, such as the NetMap analysis. This is a particularly useful tool to determine which actors are involved in a given network, how they are linked, how influential they are, and what their goals are. (30) More specifically, through a participatory approach, NetMap draws a network map of the actors involved in a given policy arena and characterizes the different links between them. (30) If there are sufficient resources and time, it is highly recommended to use a methodology like NetMap to identify the stakeholders for the aforementioned steps of the toolbox.

A major debate when thinking about the implementation, sustainability, and scale-up of interventions revolves around fidelity and adaptation. Keeping in mind the definitions summarized in Table 3, interventions require adaptations, particularly when being scaled up to other settings. Hence, it is fundamental to address the fidelity-adaptation balance and scope.

Table 3. Definitions of Implementation Fidelity and Adaptation (based on Pérez et al. (27))

Fidelity | Degree to which an intervention is implemented as planned by its developers/designers, which responds to a theory of change (that is, what pieces need to be implemented to achieve an expected outcome)
Adaptation | Bringing changes to the original design of an intervention. Such modifications can potentially be positive but can also threaten the theoretical basis of the intervention, resulting in a negative effect on expected outcomes

One way of addressing the fidelity-adaptation balance is by identifying the intervention's core elements.
Core elements are often described as essential components of an intervention that are believed to be intrinsically linked to its effectiveness and should, therefore, be kept intact. (31) Another way of looking at this is through the assumption that an intervention contains certain elements (core elements) that are responsible for its success, (32) and such elements are usually strongly linked to the mechanisms that lead to the expected impacts (that is, causal mechanisms). To define an intervention's core elements, involving stakeholders is generally a good idea because they contribute to a consensus-based identification and to dissemination. A four-step process is recommended.

1) Understanding what a core element is. Since stakeholders are not usually familiar with implementation science concepts, the leading team should organize a meeting with stakeholders (generally those who also participated in the PIP and/or the implementation fidelity analysis) to discuss the core elements of an intervention and the mechanisms to define them. This should be a short meeting with two specific goals: (i) define the concept of "core element" and (ii) assign specific tasks to stakeholders to help define the core elements of the analyzed intervention. Regarding the second goal, core elements can often be identified through the PIPs, the implementation fidelity indicators, and the sources used to assess them.

2) Definition of the core elements of an intervention. To avoid overwhelming stakeholders, who at this point should already have provided a good amount of feedback and time, the implementation fidelity indicators can be used as an expedited mechanism to secure their feedback in defining the core elements. Through the electronic bulletin board already used for the implementation fidelity analysis, an online survey (using Qualtrics, REDCap, Google Forms, etc.) can be launched, asking the stakeholders involved to prioritize the five elements that the intervention should always have in order to achieve its expected outcomes. This requires listing all the indicators and allowing stakeholders to select no more than five (or the predefined threshold set by the leading team). This process needs to be very clearly explained and illustrated to stakeholders during the meeting to make sure that they perform this task properly. Appendix 4 provides some generic slides that might be used to structure this meeting.

3) Analysis of the core elements. Based on the prioritization performed by stakeholders, the leading team will need to assess which indicators were selected most often. As several stakeholders should be performing this exercise, results can be treated as a voting scheme, and indicators receiving the highest number of mentions should be identified as indicators linked to core elements. The team will need to use its broader understanding of the PIP and the implementation fidelity analysis to translate the indicators into core elements of the intervention. For example, from the indicator "Evidence that the Commission knows the surgery waitlist linked to the electronic file," three potential core elements may emerge: the Commission, the actual list, and the electronic file. A visual portrayal of the core elements should be generated, as they are commonly interrelated (see Box 6, after the sketch below).
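As a purely illustrative aside, the tallying logic of this voting scheme can be expressed in a few lines of Python; the ballots and indicator labels below are hypothetical.

```python
# Illustrative sketch: tallying stakeholder prioritization ballots to
# surface the indicators linked to core elements. Ballots are hypothetical.
from collections import Counter

MAX_SELECTIONS = 5  # threshold predefined by the leading team

# Each ballot lists the indicators one stakeholder prioritized.
ballots = [
    ["endorsed needs report", "local teams", "electronic file",
     "waitlist", "training"],
    ["endorsed needs report", "electronic file", "local teams",
     "dissemination", "training"],
    ["local teams", "endorsed needs report", "waitlist",
     "electronic file", "follow-up"],
]

mentions = Counter()
for ballot in ballots:
    mentions.update(ballot[:MAX_SELECTIONS])  # enforce the selection cap

# Indicators with the most mentions are candidates for translation into
# core elements by the leading team.
for indicator, count in mentions.most_common(MAX_SELECTIONS):
    print(f"{indicator}: {count} mentions")
```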
Box 6: Example of the Core Elements Analysis for Costa Rica's Pilot Network Model

In liaison with stakeholders, the implementation fidelity indicators helped identify the core elements of the distinct components in Costa Rica's pilot network model. The identification of indicators relied on the Padlet board previously described, in which there was a tab to prioritize the five most relevant indicators per PIP for achieving adequate implementation. Subsequently, the lead team summarized the five indicators with the top mentions and transformed them into a functional diagram. As an example, Figure B6.1 summarizes the functional diagram of the core elements for the needs assessment component.

Figure B6.1 Core elements of the health needs assessment component of Costa Rica's pilot network
The diagram centers on the health needs assessment of the population and links it to:
• How to document: development of the needs report; endorsement of the report
• Who should participate: Community Health Boards; Support Team; Clinical Management Teams
• Institutionalization: mandatory; implementation guide
• Continuity in the collective work
• Understanding of the social determinants of the communities
• An integrated work plan accounting for local health needs

4) Workshop with key stakeholders to present and reach consensus around the core elements. Once the core elements have been analyzed, the leading team will organize a workshop to present the indicators selected through the prioritization and the translation of such indicators into core elements. The visual summary (that is, figures or diagrams) of such elements will be very important for the workshop. The main goal of the workshop is to critically assess the findings of the analysis with stakeholders and to make any further modifications necessary to the core elements identified. The workshop will take approximately two to three hours and can be conducted virtually (Zoom, Teams) or in person.

An aspect that can be of major relevance to stakeholders involved in the design, funding, implementation, and evaluation of interventions is that performing an implementation fidelity analysis and identifying core elements can be transformed into a continuous mechanism to internally monitor the quality of the intervention's implementation. In this respect, a subset of the implementation fidelity indicators, including those related to the core elements, can yield an internal quality monitoring system for the intervention. In this context, an internal quality monitoring system refers to the measurement of procedures that are relevant to the intervention and that continuously help in reviewing and assessing whether the implementation is sustained with the expected quality. To achieve this, indicators need to be refined to make sure that they comply with SMART properties, can be measured at relatively low cost, and can be transformed into feasible feedback loops in the implementation process. Achieving this is a process that exceeds the scope of this guide, but it is mentioned here to highlight the extensive value of developing a PIP, performing an implementation fidelity assessment, and defining an intervention or innovation's core elements.

Once the implementation fidelity analysis and the definition of the core elements have been concluded and reviewed by the stakeholders involved, the leading team will need to discuss the best mechanisms to disseminate the findings. Such discussion should be guided by the audience intended to be reached. Table 4 provides some ideas, although this is by no means an exhaustive list.
In addition to audience, dissemination strategies might also be guided by contextual factors such as politics and available resources.

Table 4. Possible Dissemination Strategies for the Implementation Fidelity Analysis and the Definition of Core Elements According to Audience

Type of audience | Possible dissemination strategy
Donors/financers | Report; policy brief; presentation
Implementers/managers | Report; policy brief; presentation; workshop
Evaluators | Report; policy brief; presentation
Health care force (or task force involved in the implementation) | Policy brief; presentation; workshop
Beneficiaries | Video; social media posts; posters or flyers at the point of service
Taxpayers/public opinion | Video; social media posts; media spots (TV, radio, newspapers)
Researchers/academia | Report; academic peer-reviewed article; conference presentation

Each team will need to define the best strategies, but it is fundamental to disseminate the findings in a way that directly or indirectly impacts the implementation, sustainability, and scale-up of the intervention. It is also important to highlight the value of disseminating interventions as a mechanism to inspire other regions, systems, or countries, as well as to help build the body of scientific literature about what works to provide essential high-quality services through equitable mechanisms. Box 7 discusses some of the dissemination outlets used for a specific example.

Box 7: Dissemination strategy for the implementation fidelity analysis and core element definition for Costa Rica's pilot network

The dissemination strategy comprised three extensive reports and one abbreviated report; one video targeted to a general audience; one workshop with national and regional stakeholders; academic conference presentations; and one academic paper targeted at disseminating the innovation to other settings and countries. The dissemination materials, except for the paper and the conference presentations, are listed below.

• Implementing Integrated Health Service Networks in the Huetar-Atlántica Region of Costa Rica: An Assessment of the Process. Authors: Martinez, Luis Carlos Vega; Vilar-Compte, Mireya; Gaitan Rossi, Pablo; Villar Uribe, Manuela.
• Process Evaluation of the Implementation of Integrated Networks for the Provision of Health Services. Authors: Vilar-Compte, Mireya; Gaitán-Rossi, Pablo; Velázquez, Natalia Rovelo; Bernal, Óscar; Villar Uribe, Manuela.
• Fidelity and Sustainability in the Implementation of the Integrated Networks for the Provision of Health Services of the Huetar-Atlántica Region, Costa Rica. Authors: Vilar-Compte, Mireya; Gaitán-Rossi, Pablo; Velázquez, Natalia Rovelo; Bernal, Óscar; Villar Uribe, Manuela.
• Analysis of Primary Health Care System Capacity in the Huetar Atlántica Region of Costa Rica. Authors: Eesha Desai, MS; Joseph Ross, MPA; Natalia Rovelo; Oscar Bernal, MD, PhD; Jess Wiken; Zeina Siam, PhD, MS; Dan Schwarz, MD, MPH; Manuela Villar Uribe, PhD, MPH; with technical contributions and review by the Program for Strengthening the Provision of Health Services (PFPSS) of Costa Rica.

25. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research. 2003;18(2):237-56.
26. Dane AV, Schneider BH.
Program integrity in primary and early secondary prevention: are implementation effects out of control? Clinical Psychology Review. 1998;18(1):23-45.
27. Pérez D, Van der Stuyft P, Zabala MdC, Castro M, Lefèvre P. A modified theoretical framework to assess implementation fidelity of adaptive public health interventions. Implementation Science. 2016;11(1):91.
28. Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Science. 2007;2(1):40.
29. Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implementation Science. 2010;5(1):1-9.
30. Wageningen University & Research. NetMapping. https://mspguide.org/2022/03/18/netmapping/. Accessed 2023.
31. Carvalho ML, Honeycutt S, Escoffery C, Glanz K, Sabbs D, Kegler MC. Balancing fidelity and adaptation. Journal of Public Health Management and Practice. 2013;19(4):348-56.
32. Bopp M, Saunders RP, Lattimore D. The tug-of-war: fidelity versus adaptation throughout the health promotion program life cycle. The Journal of Primary Prevention. 2013;34(3):193-207.

Throughout the four modules that comprise this toolbox, several implementation science-based tools were introduced: PIPs, implementation fidelity analysis, and the identification of active ingredients or core elements of interventions to aid in resolving the tension between fidelity and adaptation. Such tools can be used individually or in combination with one another. For implementing interventions, we recommend using all three approaches, ideally starting during the design stage and continuing through implementation. In addition, the methods proposed in the toolbox are highly pragmatic, building on the lessons learned from the COVID-19 pandemic about the use of several online applications, and participatory, so as to empower local stakeholders and foster ownership of the outcomes emerging from the application. Adaptations can nevertheless be made, for example, to rely on more in-person solutions. This will depend on the context and resources available. In addition to the methods, the toolbox offers examples and templates to facilitate its application. These should not be taken as prescriptive elements, but rather as tools that might be of use and can reduce the burden on practitioners in charge of such assessments.

The idea behind this toolbox is that being more critical about implementation and opening the "black box" between design and outcomes will help us sustain and scale better interventions. In addition, the toolbox is also focused on documenting how primary health care interventions are implemented, something that has rarely been done, and, as such, seeks to expand the collective knowledge about how to better serve communities through the application of essential services linked to public health's core function of assurance. Despite this specific focus, the toolbox can be applied to different types of interventions.
As you use the toolbox, it would be highly enriching for you to let us know what has or has not been useful and to share any documentation that can enlighten future users.