Evaluation of the Core Course on Health Sector Reform and Sustainable Financing

This report was prepared by William Eckert with the assistance of Ray Rist, Karen Gilbride, Eugene Boostrom and Song Li Ting Fong.

EDI Evaluation Studies Number ES98-11
Evaluation Unit
Economic Development Institute
The World Bank
Washington, D.C.

Copyright © 1998
The International Bank for Reconstruction and Development/The World Bank
1818 H Street, N.W., Washington, D.C. 20433, U.S.A.

The World Bank enjoys copyright under protocol 2 of the Universal Copyright Convention. This material may nonetheless be copied for research, educational, or scholarly purposes only in the member countries of The World Bank. Material in this series is subject to revision. The findings, interpretations, and conclusions expressed in this document are entirely those of the author(s) and should not be attributed in any manner to the World Bank, to its affiliated organizations, or the members of its Board of Executive Directors or the countries they represent.

Evaluation of the Core Course on Health Sector Reform and Sustainable Financing

TABLE OF CONTENTS

Executive Summary
Part I.    Introduction
Part II.   Course Objectives
           • Overall Course Objectives
           • Process/Implementation Objectives
           • Outcome Objectives
           • Module Objectives
Part III.  Evaluation Design and Methods
           • Research Designs
           • Data Collection Methods
           • Analytic Methods
           • Study Limitations
Part IV.   Evaluation Results
           IV.A. Participant Demographics
           IV.B. Formative Evaluation Results
           IV.C. Summative Evaluation Results
                 • Course Expectations
                 • Overall Course Evaluation
                 • Course Achievement
                 • Evaluation Questions
Part V.    Conclusions
Part VI.   Annex
Executive Summary

The Economic Development Institute (EDI) of the World Bank sponsored a core course on health sector reform and sustainable financing, held in Washington, D.C., in October/November 1997. The course lasted over 4 weeks and consisted of 8 separate week-long modules. It enrolled a total of 78 participants, including mid-career and senior program managers involved in their countries' health sector reform efforts, and World Bank staff. A number of different teaching/learning methods were used throughout the course, including distance learning, applied software, the case method of learning and the use of internationally known experts. Costs for the course, while still incomplete, were in the neighborhood of $1 million, with approximately 40% of these costs shared and 1.5% recovered.

Both formative and summative evaluation strategies were used to evaluate each of the 8 modules and the overall course. A variety of methods was used in these evaluations. Formative evaluations consisted of continuing and open-ended feedback procedures that provided information to the course planners, allowing them to make adjustments in individual modules and for the overall course. The summative evaluation methods included specialized and standardized End-of-Module (EOM) and End-of-Course (EOC) questionnaires. A pre/post design was also used to administer a set of cognitively based questions to estimate the effects of the distance learning method used in Module 1. A number of limitations to the evaluation study are identified and their effects upon the study's results are presented.

Results from the evaluation show an overall high approval rating by participants, with a number of areas and items emerging as specific strengths and weaknesses. The evaluation also showed a great deal of openness and flexibility on the part of the course organizers in utilizing feedback and making mid-course corrections. Following are some of the major evaluation findings:

• A vast majority of participants viewed the course favorably. This included support activities, course content and preparation, and the amount they learned from their participation in the 4 to 5 week course. There was significant variation among modules in the degree of positive reaction by participants, and significant variation among various items.

• In-course feedback opportunities were built into the program and appear to have been used effectively by staff to make mid-course corrections. A strong example of this interest in course quality and willingness to make changes can be seen in Module 6. Two sessions of this module were offered in successive weeks. Results from the first week's experience were used to make changes in the second week's program. The results showed an overall increase in effectiveness across groups and items.

• Pre/post self-assessments of how much participants learned in each module showed very large increases. However, Module 1 utilized a limited set of cognitive-based questions to measure the degree of "learning" in a more objective manner. Results from this limited test show that much less may have been learned than was suggested by the pre/post self-assessment results.

• There were some important differences between subgroups. Results from the study were broken down by two major subgroups: 1.
Participants with and without certificates/degrees in economics; and 2. Participants with 10 years or less versus more than 10 years of work experience in the healthcare field. The greatest differences appeared between those with and without training in economics, and these differences appear to be related to the subject content and technical difficulty of the modules' material.

• The most common complaint by participants related to the amount of daily preparation required, especially the amount of required reading during the evening, and the feeling that the time allowed for preparation was inadequate. Participants also felt that many of the examples used during modules did not relate adequately to their national situations. Despite these perceived shortcomings, participants felt that the course provided solid technical training in healthcare reform, presented by competent experts, and that the experience was highly effective in meeting their needs and requirements.

PART I: INTRODUCTION

The World Bank and its client countries' concern with health sector reform prompted a learning program initiative that has evolved into the Economic Development Institute's (EDI) core course on Health Sector Reform and Sustainable Financing. The primary goal of this program is to provide intensive training for country clients and World Bank staff on different options and approaches to health sector development. EDI developed this course after a lengthy design and development phase. The program consists of 10 modules aimed at mid-career and senior program managers who are involved in various aspects of health sector reforms in their respective countries, and World Bank staff. An intensive pilot course that included 8 of the 10 modules was held in Washington, D.C., from October 22 to November 21, 1997. This pilot course offering is the focus of the present evaluation.

The complete program on health sector reform and financing contains 10 modules. Not all modules were offered during this core course. Modules 1 - 7 and Module 10 were offered, including 2 offerings of Module 6; pre-course demand led planners to offer the second session of Module 6. Modules 8 and 9 were not complete at the time of this course and were not offered. Module 1 differed from the other modules in that it was designed around a distance learning strategy. For Module 1, participants received course material 6 weeks prior to the course.

All 10 Modules, both those offered and those not offered, contained a set of Learning Objectives. (See Flagship Program on Health Sector Reform and Sustainable Financing, EDI, June 5, 1997, ANNEX 3, pp. 16 - 48.) These objectives are stated at a general level. They provide the basis for more specific objectives, including strategies and types of learning methods used in each of the course modules.

A total of 78 participants were enrolled in this pilot course. There were several early departures and a number of participants joined the course late. This resulted in an evaluation assessment based upon a maximum of 68 participant respondents. The number varied by module and may not have included the same exact group from module to module, or for the overall course evaluation.

Final cost calculations for the development and delivery of the course have not been completed at this time because costs continue to be submitted.
However, a general estimate at this time puts these costs in the range of $1 million. This is a fully loaded estimate that includes World Bank staff time and training materials costs. Of this preliminary cost figure, approximately 40% was shared and 1.5% recovered.

Both formative and summative evaluation strategies were used to evaluate each module and the overall course. The formative evaluation consisted of group interviews conducted during each module, end-of-module summary data, and open-ended suggestions and communications for individual modules and the overall course. This information was provided to course planners and presenters for use in making adjustments to specific modules during their presentation or, when applicable, prior to the second presentation of the module.

The summative evaluation consisted of several parts. One part was a set of "achievement" questions used for Module 1, the distance learning module. Systematically selected sets of questions designed to measure mastery of course material were administered at the beginning and at the end of Module 1. For all modules, including Module 1, end of module (EOM) questionnaires were administered that contained both standard questions applicable to all modules and questions specific to each module. Questions about both process and outcomes were asked. An end of course (EOC) questionnaire was also used to assess the overall course.

PART II: COURSE OBJECTIVES

Course planners identified a set of objectives for the course generally, and for its components. Specific sets of objectives were identified for the overall course and for each of the 8 modules presented during the course. These objectives addressed process/implementation issues as well as overall course and module-specific outcomes. Of additional interest was how the degree to which these objectives were met varied by relevant personal and professional characteristics of the participants. Characteristics considered were World Bank staff or non-World Bank staff; gender; possession of a degree in Economics; years of experience in the health care field; and region. In the final analysis, only two of these features, degree in Economics and years of experience in the health care field, were used. Following is a description of the specific objectives for the overall course and for individual modules, broken down by process/implementation and outcome.

Overall Course Objectives

Since the individual modules dealt with specific learning and skill accomplishments, the stated objectives of the overall course dealt mainly with process and implementation issues. These addressed the key areas of support and delivery methods. Several outcome objectives also were identified and assessed through the evaluation.
Following are the course objectives by major area for the overall course:

Process/Implementation Objectives
• to treat course topics in sufficient depth
• to employ the case-method of learning as a useful learning method
• to involve participants actively in the course
• to provide special presentations that were useful to participants
• to provide a well organized course
• to provide a balance in training between "what to do and how to do it"
• to present materials at an adequate pace, given their volume
• to utilize an evidence-based teaching method effectively
• to utilize a case-study teaching method effectively
• to provide feedback opportunities useful to participants
• to make mid-course changes based upon participant feedback
• to provide useful tutoring support
• to provide adequate logistical and practical arrangements to participants
• to provide adequate social events and entertainment for participants
• to provide participants with adequate lodging

Outcome Objectives
• to provide training which was relevant to participants' work
• to provide training which was a worthwhile use of participants' time
• to provide training which met participants' expectations
• to provide training which participants would recommend to others

Module-Specific Objectives

Each of the modules presented during the course contained a set of common objectives, as well as a set of objectives specifically related to each module's material. Following is a list of those common objectives:

Common Module Objectives
• to provide background materials/papers whose content was relevant to participants
• to provide background materials/papers that were clear to participants
• to provide country-based case studies that were relevant to participants
• to utilize the case method of learning effectively
• to provide trainers who were knowledgeable about their respective topics
• to provide trainers who would answer participants' questions adequately
• to provide trainers who were clear in their presentation of materials
• to treat each topic in sufficient depth
• to provide training that was relevant to participants' work
• to provide training that was considered useful by participants

Module-Specific Objectives

Module 1's objectives included only part of the common set. This Module differed from the others in that it utilized a distance learning approach to training. That approach required an additional set of objectives to capture the overall intent of the module. Module 1 also specified as one of its key objectives a "cognitive" gain in information about key module-related concepts. In addition to the common objectives, Modules 2 and 10 contained several objectives specific to their content which were not features of the other modules, such as the "Policy Maker" software package used in Module 2. Additionally, Modules 2 - 7 specified certain knowledge/information areas where pre/post gains were expected. Following is a list of the objectives specific to each of these Modules.
Module 1
• to teach key concepts on health sector reform to participants through distance learning
• to prepare participants for the course through distance learning
• to provide a set of fully explained key concepts in health sector reform
• to provide a set of end-of-chapter questions that would aid learning the key concepts
• to provide adequate and comprehensive distance learning materials to ensure appropriate course preparation
• to provide Module texts of adequate length
• to provide enough time for participants to prepare prior to arriving for the course
• to provide adequate review time
• to increase "cognitive" understanding of participants on module-related concepts

Module 2
• to provide adequate training on the "Policy Maker" software program
• to increase pre/post module knowledge/information gain on the key substantive topics

Modules 3 - 7
• to increase pre/post module knowledge/information gain on key substantive topics

Module 10
• to provide substantive sessions on Program Planning, Implementation and Financial Management
• to provide substantive sessions on Human Resource Management
• to provide substantive sessions on Managing Quality of Care

PART III: EVALUATION DESIGN AND METHODS

Research Designs

In order to determine the degree to which the program objectives were met, a multi-method evaluation approach was used. This approach consisted of both formative and summative evaluation design strategies.

The purpose of the formative evaluation was to help meet the overall and module-specific objectives by making mid-program changes in the content and delivery of course materials and information. Information for making these changes came primarily from two sources: group interviews held approximately mid-way through each module, and suggestions obtained on a continuous basis. While the suggestions were completely open-ended, allowing for maximum flexibility in the topics addressed, the group interviews were more structured. A course interview guide was used by the Senior Evaluator to direct the issues and content covered during each group interview. The following topics were covered during each group interview:

• pace of the course
• opportunity to ask questions
• friendliness of the learning environment
• clarity of trainers' presentations
• feeling of involvement in the course
• suggestions to trainers for improving the course
• amount of time allowed for presentation
• view of background papers and other reading materials
• balance of the Module between theory, evidence and exercise

All Module sessions participated in the mid-course interviews except Module 1. Module 1's special status as a distance learning module with only 2 days of review made it less amenable to a mid-course review.

The summative evaluation employed two basic designs, a pre/post test design and a post-test-only design, to assess the process and outcomes of the course. Pre/post designs were used in three parts of the course. First, this design was used to assess the degree to which participants mastered basic concepts presented in Module 1, the distance learning module. For this design, a set of 40 questions was developed by the course planners and module presenters which captured the basic ideas and concepts of the module. A systematic, though non-random, method was used to divide the questions into two sets, one set to be administered at the beginning of the course and the other at the end of the course.
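To make this split concrete, the minimal sketch below (in Python) shows one way such a systematic, alternating division of a 40-item question bank could be carried out, together with the random selection of repeat items described under Data Collection Methods. The question identifiers, the random seed, and the code itself are illustrative assumptions, not the actual Module 1 item bank or selection procedure.

# Illustrative sketch only: question IDs, seed, and code are hypothetical,
# not the actual Module 1 item bank or procedure used by course planners.
import random

question_bank = [f"Q{i:02d}" for i in range(1, 41)]   # the 40 questions developed

# Systematic (non-random) split: alternate questions go to the pre-test set,
# the remainder to the post-test set.
pre_test_set = question_bank[0::2]     # Q01, Q03, ..., Q39
post_test_set = question_bank[1::2]    # Q02, Q04, ..., Q40

# Five pre-test questions are repeated at the end of the course (see Data
# Collection Methods) to gauge retention under favorable conditions.
random.seed(1)                         # arbitrary seed, for reproducibility only
repeat_questions = random.sample(pre_test_set, 5)

second_test = post_test_set + repeat_questions
print(len(pre_test_set), len(post_test_set), len(second_test))   # 20 20 25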
On the first day of Module 1, participants were asked questions from the first set in a group setting using a computerized touchpad. The percentage of correct answers for each question was shown to the audience after each response in a computerized graphic. On the last day of the course, the second set of questions was asked using the same format. This group of questions included the basic selected set plus five randomly selected questions from the pretest set administered on the first day. The subgroup of previously asked questions was included to see what effects testing may have had upon learning basic course concepts. In order to assure anonymity for participants, no individual information was collected on respondents for these tests. Results were only made available at an aggregate level for each question.

A second type of pre/post test design was used for all module sessions, except Module 1, to assess the degree to which participants gained an understanding of basic course concepts. This was a hybrid type of pre/post test design, in that it asked participants at the end of the course to rate their level of understanding of key concepts before taking the course (pre) and after taking the course (post). It is important to note that both pre and post ratings were taken at the same time, at the end of the course. Disadvantages of this design are that it relies on recall for assessing the pre-test status of the participant, and that it relies on participants' self-assessments. One strength, however, is that by asking for this information after the course, participants have more reliable information with which to assess their pre-course understanding of the key concepts.

Thirdly, a modified pre/post test design was used to assess course expectations among participants. On the first day of the course, participants were asked to list their two primary expectations for the course. These were open-ended questions that allowed for maximum flexibility in responding. The end of course questionnaire administered on the last day of the course asked a direct question as to what degree participants felt that their expectations for the course had been met. Participants were asked to refer to the expectations they listed at the beginning of the course when answering this question.

A post-test-only design was used for the overall course and each module to assess process and outcome questions. This strategy consisted of an end-of-course or end-of-module questionnaire administered to participants. The questionnaires were designed to ask questions consistent with the stated course objectives. Questionnaires consisted of open-ended and structured, closed-ended questions. Most of the structured questions used a 6-point scale. The 6-point scale was used to measure degree or extent, and was modeled after scales commonly used in other EDI and WB evaluations. Several of the questions on the Module 1 questionnaire used a 3-point scale.

Data Collection Methods

Data for both the formative and summative evaluations were collected systematically through a set of structured and unstructured instruments. These consisted of the following:

• Cognitively based test questions: Two sets of test questions on the contents of Module 1 were administered to participants. The questions were formulated by the course planners and selected for their content relevance by staff. A total of 40 questions were selected.
These questions were divided into two groups by selecting every other question for assignment to each group. The result was two "comparable" groups, although the lack of random assignment meant that some systematic differences could not be ruled out. One group of questions was administered at the beginning of the course, prior to the beginning of the Module 1 review. The second set was administered 4 weeks later, after all modules had been presented. Included in the second set of questions were 5 randomly selected questions from the first set. These questions were included as a means of determining how much information was learned under optimal conditions, where participants had received the questions and correct answers previously. Questions were administered to participants as a group, using an overhead projector to present each question. An electronic touch-pad was used to record participant answers and quickly analyze the number of correct responses at the aggregate, group level. Individual level responses could not be determined. This procedure was introduced in order to ensure anonymity to participants. The result, however, was the inability to analyze these data at the individual level. (A brief sketch of this aggregate-only scoring follows this list.)

• Expectations questionnaire: A questionnaire was administered to all participants at the beginning of the course asking them to list their two most important expectations for the course. This was an open-ended questionnaire that permitted respondents to express these in their own terms. Later, these responses were entered into a word processing database and content analyzed.

• Suggestion boxes: Suggestion boxes were strategically located throughout the common convening area of the training facility. Blank cards were provided and participants were encouraged to utilize these as a means of providing information on issues about the course they viewed as important. Suggestions were routinely collected twice daily. The results were provided to project staff as they were received.

• Staff contact: Participants were encouraged to present complaints and recommendations directly to staff at any time during the course. The Senior Evaluator was identified as independent of the course planners and as someone who would maintain the anonymity of participant responses. Feedback to the evaluator from participants was conveyed daily to course planners and presenters.

• Formal in-course feedback sessions: Formal in-course feedback sessions were conducted by the Senior Evaluator for Modules 2 - 7. These sessions were not used for Modules 1 and 10. They consisted of group interviews conducted by the Senior Evaluator after approximately 1 1/2 to 2 days of presentation. A structured interview guide was used to cover key questions about performance and content issues for each module. Responses were summarized and provided to course staff and presenters. The intention of this feedback was to allow for adjustments to be made in mid-course.

• Structured questionnaires: Structured assessment questionnaires were administered by the Senior Evaluator at the end of each module and at the end of the course. These questionnaires contained primarily closed-ended questions with a scaled response set. A set of common questions was included in each questionnaire to measure features of each module that could be used to compare it with other modules. Additionally, questions specific to the content of each module were included. These questionnaires were composed on scannable forms for easy processing.
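The following minimal sketch illustrates the aggregate-only scoring described above for the touch-pad sessions. The response data, the answer key, and the function name are hypothetical; the sketch is an assumption about how such tallies could be reproduced, not a description of the system actually used.

# Illustrative sketch only: responses and answer key below are hypothetical.
# Because touch-pad answers were recorded anonymously, only aggregate,
# per-question percent-correct figures (as later reported in Table 5) can be
# computed; no individual-level analysis is possible.
from collections import Counter

def percent_correct(responses, correct):
    """Share of a group's (anonymous) answers that match the correct choice."""
    counts = Counter(responses)
    return 100.0 * counts[correct] / len(responses)

# Hypothetical pre- and post-test answer distributions for one repeated question.
pre_answers = ["A", "B", "B", "C", "B", "D", "B", "A"]
post_answers = ["B", "B", "B", "C", "B", "B", "A", "B"]
answer_key = "B"

print(f"Pre-test:  {percent_correct(pre_answers, answer_key):.1f}% correct")
print(f"Post-test: {percent_correct(post_answers, answer_key):.1f}% correct")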
Analytic Methods

Analytic methods varied by the type of question asked. Information from suggestions, verbal feedback, responses to the question on participants' course expectations, and the in-course feedback sessions was reviewed, its content analyzed and summarized, before the information was relayed to course staff and trainers. This type of content analysis was suitable for the unstructured responses and the need to relay the information quickly. The open-ended questions contained in the various questionnaires were not analyzed for this report because of the time required to process this large volume of open-ended responses. An expanded final report is planned by EDIES that will include analysis of this additional information.

Methods used to analyze the data collected from the structured, scaled questions consisted mainly of a set of standard, descriptive statistics. Respondent frequencies and percentages were computed for each question category. The arithmetic means and standard deviations were also calculated and used as summary indicators of response levels.

We used two general categorical breakouts based upon the results from the demographic/descriptive questions asked of participants. These were (1) years of experience and (2) whether participants had a degree in economics. Dichotomous groups were developed for each of these attributes and results calculated for each of these groups as part of our analysis. A fuller description of these groups and the rationale for their selection is presented in the results section.

Most of the structured questions utilized a 6-point scale similar to those used in previous WB and EDI evaluations. The lower range of the scale, 1 - 3, represents a failure to meet expectations, whereas the upper range, 4 - 6, represents meeting and exceeding expectations. An indicator of course success commonly used in previous EDI evaluations is the percentage of responses in the 5 - 6 range. This same indicator was used in this analysis. Here, we applied the criterion of two-thirds of responses falling within this range as a benchmark for assessing an indicator. In other words, indicators where less than 66% of respondents fell within the 5 - 6 range of the scale were noted as areas where improvements could be made. Since this was the first offering of this pilot core course using a number of innovative methods, it may be appropriate to raise this two-thirds threshold to three-quarters in later offerings.

Study Limitations

Several limiting factors are present in this evaluation. These factors are the result of attempting to conduct an evaluation in an applied situation in which the intrusive effects of the evaluation effort on the program have to be considered. While these limitations may be unavoidable, they are important to consider when reviewing study results and drawing conclusions. Following are the major limitations of this evaluation:

• The scope of the study is limited to an assessment of initial and immediate effects of the training. Along with evaluating various process issues of concern, this study attempted to measure the results of the training in terms of the effects upon individual participants. The logic of this training is that, ultimately, those trained will bring about change within their respective national health systems. This evaluation is limited to any immediate change that occurred within the 1-month training period.
Assessing longer range impact must wait for sufficient time to elapse so that those trained can begin to apply what was learned. Questions around this aspect of the training effects could be addressed in a follow-up to this initial evaluation.

• Some information was lost because of limitations on the use of personal identifiers. In order to ensure confidentiality and convince participants that their identities were being protected, each questionnaire was completely anonymous. This meant that responses between modules and for the overall course could not be linked together for any individual participant. Furthermore, it was not possible to utilize the information available on each participant from his or her application. Much more information would have been available to this evaluation had it been possible to link question responses and personal data for each individual.

• Many of the indicators of level of achievement and change in knowledge/skills relied upon participants' subjective self-assessments. These may not be reliable indicators of either of these outcomes. The true level of achievement, or the true level of change in knowledge or skill, may differ from the participant's perception. A more objective measure, similar to that used in Module 1's touch-pad session, would provide a more valid indicator of change or achievement.

• There may be a problem with the validity of the standard 6-point scale used to assess course outcome and performance. Several problems are associated with this scale. One difficulty is that the use of an even-point scale eliminates a true mid-point, forcing responses in one direction or the other. A second problem with this scale is its inability to produce results that correspond to a normal distribution across the scale's points. Almost all responses cluster at the high end of the scale. This produces averages (means) that appear to be well above the mid-point of the scale's range. It gives a false and possibly misleading result where the appearance of a high score, well above the scale's midpoint, may simply be an artifact of the lower end of the scale being invalid and not used.

• The limited range of the scale used by respondents makes results overly sensitive to outliers. This is compounded when the number of participants is also low. Interpretation of measures of central tendency, such as the arithmetic mean, should account for this sensitivity. In this study, where the lower end of the scale is used infrequently, the main effect occurs when a low score is recorded: the result is a relatively significant reduction in the average (mean) score for that particular module. The effects are greater when fewer participants are involved. It may be misleading to judge a module by a single measure when a lower mean score may simply reflect one or two outliers in the form of low scores. It may be more useful to look at several indicators when judging a module's overall performance, such as the mean, standard deviation, and percent in the 5 - 6 range.

• The touch-pad session used in Module 1 provided a much stronger, more objective measure of participants' achievement than the self-assessment measures used in the other modules. Some aspects of this test, however, should be considered when interpreting the results. One limitation is that the "test" questions used were never validated, but simply developed by the module's authors and selected by staff as indicators of achievement.
This does not mean that they were not valid indicators, only that there is no empirical basis for using them as such. This situation is not uncommon in training and educational programs. A second limitation is the non-random selection method used to divide the questions between the pre and post tests. While this may have produced two equally difficult groups of questions, there is still the possibility that a systematic difference between the groups existed and affected the results. Random assignment would have eliminated this possibility. A final limitation is in the administration of the questions. It appeared to the evaluator from his observations that some consultation or collaboration occurred between participants, especially during the pre-test. This may have produced an inaccurate result, indicating a higher score than would have been obtained had this testing been monitored and controlled more closely.

These limitations should be considered when interpreting the different results obtained from the various sources used in this evaluation. However, it should also be noted that the evaluation methods used, as well as the course structure and content, were "pilots." Much of what was learned in both areas can be used in future offerings, and evaluations of those offerings, to produce better products.

PART IV: EVALUATION RESULTS

This section reports findings from the various evaluation components described previously. Most results reported are from the closed-ended, structured response questions used throughout the course evaluations. The following sections report results for the major components of the core course.

IV.A: PARTICIPANT DEMOGRAPHICS

Demographic information was collected on those participating in the core course. This consisted of personal and professional information, and the region or area in which the participant works. The personal and professional information was collected to provide a valid profile of participants, using a number of relevant indicators. These indicators were (1) whether the individual worked for the World Bank, (2) gender, (3) whether the individual had a degree or certificate in economics, and (4) years of experience in the healthcare field. Figure 1, Parts A and B, shows the results of these demographic breakdowns.

[FIGURE 1: Demographics of Flagship Course Participants. Part A: Personal/Professional Characteristics (World Bank staff vs. non-Bank staff; gender; economics degree; years of healthcare experience). Part B: Regional Distribution.]

Part A gives the personal and professional distribution of participants. It shows that most participants (91%) were not employees of the World Bank and that the group was largely (79%) male. The range was much narrower on the remaining two features. More participants (63%) lacked degrees or certificates in economics; however, those with such a degree or certificate made up nearly 40% (37%) of the group. In terms of participant experience in the healthcare field, the two groups were nearly equal. Slightly under 50% of participants (49%) had less than 10 years of experience in this area, while slightly over 50% (51%) had 10 years or more experience.
These results suggest that the two most productive variables for further breakouts of the results would be those with and without an economics degree, and those with less than 10 years of experience in the healthcare field versus those with 10 years or more. This does not mean that gender or World Bank employment are unimportant. However, the low number of females (21%) and World Bank employees (9%) in the participant population makes it difficult to use these results as group indicators.

Part B shows the distribution of participants by World Bank regions. As is obvious from these results, the two biggest areas were Sub-Saharan Africa and South Asia. Together, they made up nearly half (49.3%) of the group. Participants from Latin America/Caribbean and East Asia were next in size, each making up 16.4% of the total group. They were followed by Europe/Central Asia with about 12% (11.9%) of the group. The smallest number came from the Middle East/North Africa area, accounting for only 6% of the total group.

IV.B: FORMATIVE EVALUATION RESULTS

Primary data for the formative evaluation were obtained from group interviews held during the training, usually at the middle or end of the second day. Modules 1 and 10 were not included in this phase of the evaluation because of logistical limitations. A structured interview guide was used by the Senior Evaluator to address specific aspects of each of the modules. The objective was to obtain information that could be used to identify problems with an individual module's content or conduct. This information was then to be used for making "mid-course corrections" to the training.

Following each interview session, the results were reviewed and summarized in a memorandum to the course task manager. The information was shared with other EDI staff and module trainers. Discussions were held between EDI staff, including the Task Manager, and the trainers to determine the need for any mid-course change and the strategies for accomplishing that change.

The evaluation feedback prompted some change in all 7 of the modules included in the formative evaluation. Participants identified a number of problems with the various modules, which were subsequently addressed by trainers and EDI staff. Two problems were consistently identified by participants for all of the modules addressed. One was the volume of reading required in the evening in order to prepare for the next day. These assignments were seen as overly burdensome by participants, who became upset when the materials were not used subsequently in the module. Second was the perceived lack of relevance of the subject matter or the case examples used in the modules. Participants consistently raised the point that the material or examples had no relevance for their country. Both trainers and EDI staff reported that changes were made as a result of this information.

The effectiveness of this method was assessed with two questions contained in the End of Course questionnaire. Participants were asked if they found the feedback sessions useful and whether they felt that these sessions resulted in changes being made. Results from these questions are shown in TABLE 1.
Table 1: Participants' rating of evaluation feedback sessions

                                      Were sessions     Did sessions
                                      useful?           cause change?
All Participants
  Mean (S.D.)                         4.81 (0.78)       4.63 (1.02)
  % 5 or 6                            66.7              61
  Range, Min                          2                 1
  Range, Max                          6                 6
  N                                   63                59
Economics Degrees
  Mean (S.D.)                         4.92 (0.72)       4.73 (1.16)
  % 5 or 6                            70.8              68.2
  Range, Min                          4                 1
  Range, Max                          6                 6
  N                                   24                22
No Economics Degrees
  Mean (S.D.)                         4.74 (0.83)       4.58 (0.94)
  % 5 or 6                            63.2              58.3
  Range, Min                          2                 2
  Range, Max                          6                 6
  N                                   38                36
10 Years of Experience or Less
  Mean (S.D.)                         4.59 (0.64)       4.67 (0.70)
  % 5 or 6                            51.9              62.5
  Range, Min                          4                 3
  Range, Max                          6                 6
  N                                   27                24
More than 10 Years of Experience
  Mean (S.D.)                         4.97 (0.84)       4.60 (1.19)
  % 5 or 6                            77.8              60
  Range, Min                          2                 1
  Range, Max                          6                 6
  N                                   36                35

TABLE 1 gives the results of the two questions for all participants, broken down by the major descriptive categories of (1) economics degree vs. no economics degree and (2) 10 years or less of experience vs. more than 10 years of experience. Reported are the arithmetic mean and standard deviation, the percent who fell into the top ranks (5 and 6), the ranges, and the number of participants. The results suggest that, in general, participants felt that these feedback sessions were useful and did bring about some change. There is also evidence that a sizable number felt otherwise, especially within some of the subgroups considered. The mean scores for all of the groups examined fell within the 4th rank, which is the lowest rank in the positive range of the 6-point scale.

These results are generally consistent across the four subgroups considered. There are some noticeable differences between these groups, however. Respondents with degrees in economics and those with more experience in the healthcare field were more inclined to feel that these feedback sessions were useful. When asked whether the sessions produced change, those with economics degrees/certificates were still more likely to respond in the affirmative. However, those with the most experience in the healthcare field were least likely to believe that change had occurred.

IV.C: SUMMATIVE EVALUATION RESULTS

Course Expectations

Each participant was asked to identify two primary expectations for the course. These responses were reviewed and catalogued into 7 major categories. An 8th category was added for all "other" responses. Figure 2 shows the results of this analysis. According to these results, almost half of the stated expectations clustered into two groups. One quarter (25%) of responses were that participants wanted to obtain a greater understanding of health sector reform. Nearly as many (21%) said they wanted to learn more about finance concepts and practices. These two areas are interesting for their contrast: understanding reform is at a much more general level than learning specific concepts and practices about financing healthcare systems. Participants seem to have been looking both for the ability to understand and diagnose reform at a general level and for some very specific tools with which to accomplish this goal. The other categories of expectations also appear to fall along the same lines of general goals and specific tools.

Whether these expectations were met was a question also addressed in this evaluation. A question was asked on the overall End of Course questionnaire about these expectations. Participants were asked to refer to the expectations they had listed at the beginning of the course and answer the question, "To what degree did the course fulfill your expectations?" TABLE 2 shows the results of this question for all participants and broken down by economic training and years of experience.
The results show that, for the most part, these expectations were met or exceeded. Out of a maximum score of 6, the mean score for all participants was 5.22, with slightly over 89% (89.1%) scoring in the 5 to 6 range. This result did not change greatly when broken out by economic training. Those reporting a degree in economics had a mean score of 5.24, which was not substantially different from the mean of 5.21 for those who reported not having a degree in economics. A higher percentage of those with an economics degree, 92.0%, fell into the 5 to 6 range than those without an economics degree, 86.8%. The most notable difference was between those participants reporting 10 or more years of experience in the healthcare field and those with less than 10 years of experience. The more experienced group reported a higher mean score, 5.32 vs. 5.07, and a larger percentage in the 5 to 6 range, 91.9% vs. 85.2%. While a large majority of participants felt that their expectations were met or exceeded, the level appeared slightly higher for those with greater experience in the healthcare field.

[FIGURE 2: Responses to Question of Course Expectations. Understanding of Reform 25%; Finance Concepts & Practices 21%; Experiences of Reform 15%; Economic Concepts 11%; Policy Concepts & Practices 10%; Assessment 10%; Resource Allocation 5%; Other 3%.]

TABLE 2: DEGREE FLAGSHIP COURSE PARTICIPANTS FELT EXPECTATIONS MET

Type of Participant     Mean    S.D.    Min. Value   Max. Value   % 5 or 6
All Participants        5.22    0.68    3            6            89.1
Economists              5.24    0.72    3            6            92
Non-economists          5.21    0.66    4            6            86.8
Most Experienced        5.32    0.63    4            6            91.9
Least Experienced       5.07    0.73    3            6            85.2

Overall Course Evaluation

Evaluation of the overall course was based upon the results of an End of Course (EOC) questionnaire administered to participants on the final day. This questionnaire was structured to capture two primary dimensions of the course, process and outcomes. This was an opportunity to conduct a comprehensive assessment of the overall course using participant perceptions. By isolating activities into these two areas, course planners and presenters could examine and identify areas where improvements may best be introduced. TABLES 3 and 4 show the results from the EOC questionnaire for the overall core course.

TABLE 3 presents findings from questions on processes and outcomes using the standard WB 6-point scale. A number of process-related questions also used a 3-point scale that ranged from too little to too much. The process questions using the 6-point scale dealt mostly with implementation, while the 3-point scale items addressed course content. For the implementation items, participants gave generally favorable ratings, but they were noticeably less favorable than those given to the outcome items. Only one of the items, that measuring the degree to which participants felt involved in the course, had a mean score above 5.0. All others fell in the 4-point range. Least favorably regarded was the usefulness of the tutor, followed by the effectiveness of the in-course feedback sessions. There was no clear pattern among these items between those with and without degrees in Economics. However, participants with more experience in the healthcare field tended to give more favorable ratings to these implementation items than their less experienced counterparts.
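For readers who want to reproduce the summary indicators used in Tables 1 through 4, the minimal sketch below shows how the mean, standard deviation, percent of responses in the 5 - 6 range, and the two-thirds benchmark described under Analytic Methods could be computed from raw 6-point ratings. The ratings, group labels, and code are hypothetical illustrations, not the actual data or the processing actually used for this report.

# Illustrative sketch only: ratings below are hypothetical 6-point responses.
from statistics import mean, stdev

BENCHMARK = 66.0  # percent of responses expected to fall in the 5 - 6 range

def summarize(ratings):
    """Return the summary statistics reported for each questionnaire item."""
    pct_5_or_6 = 100.0 * sum(1 for r in ratings if r >= 5) / len(ratings)
    return {
        "mean": round(mean(ratings), 2),
        "sd": round(stdev(ratings), 2),
        "pct_5_or_6": round(pct_5_or_6, 1),
        "min": min(ratings),
        "max": max(ratings),
        "n": len(ratings),
        "below_benchmark": pct_5_or_6 < BENCHMARK,
    }

# Hypothetical ratings for one item, broken out by subgroup.
ratings_by_group = {
    "All participants": [6, 5, 5, 4, 6, 5, 3, 5, 6, 4],
    "Economics degree": [6, 5, 5, 6, 5],
    "No economics degree": [5, 4, 6, 3, 5],
}

for group, ratings in ratings_by_group.items():
    print(group, summarize(ratings))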
TABLE 3: OVERALL FLAGSHIP COURSE EVALUATION QUESTION RESULTS
(Entries for each group: Mean (S.D.); % 5 or 6; Range Min-Max; N)

Rating of the following course items:

Degree course relevant to work
  All Participants:                    5.50 (0.50); 100; 5-6; N=64
  Economics Degrees:                   5.48 (0.51); 100; 5-6; N=25
  No Economics Degrees:                5.05 (0.51); 100; 5-6; N=38
  10 Years of Experience or Less:      5.30 (0.47); 100; 5-6; N=27
  More than 10 Years of Experience:    5.65 (0.48); 100; 5-6; N=37

Degree course topics treated in depth
  All Participants:                    5.00 (0.50); 87.5; 4-6; N=64
  Economics Degrees:                   5.00 (0.50); 88; 4-6; N=25
  No Economics Degrees:                5.00 (0.52); 86.8; 4-6; N=38
  10 Years of Experience or Less:      4.85 (0.46); 81.5; 4-6; N=27
  More than 10 Years of Experience:    5.11 (0.52); 91.9; 4-6; N=37

Degree course worthwhile use of time
  All Participants:                    5.31 (0.64); 90.6; 4-6; N=64
  Economics Degrees:                   5.40 (0.58); 96; 4-6; N=25
  No Economics Degrees:                5.24 (0.68); 86.8; 4-6; N=38
  10 Years of Experience or Less:      5.19 (0.68); 85.2; 4-6; N=27
  More than 10 Years of Experience:    5.41 (0.60); 94.6; 4-6; N=37

Degree case-method useful
  All Participants:                    4.98 (0.79); 79.4; 2-6; N=63
  Economics Degrees:                   4.92 (0.83); 70.8; 3-6; N=24
  No Economics Degrees:                5.00 (0.77); 84.2; 2-6; N=38
  10 Years of Experience or Less:      4.93 (1.04); 70.4; 2-6; N=27
  More than 10 Years of Experience:    5.03 (0.56); 86.1; 4-6; N=36

Degree participant felt sufficiently involved in course
  All Participants:                    5.20 (0.67); 89.1; 3-6; N=64
  Economics Degrees:                   5.16 (0.80); 84; 3-6; N=25
  No Economics Degrees:                5.21 (0.58); 92.1; 4-6; N=38
  10 Years of Experience or Less:      5.07 (0.68); 88.9; 3-6; N=27
  More than 10 Years of Experience:    5.30 (0.66); 89.2; 4-6; N=37

Degree special presentations/events useful
  All Participants:                    4.82 (0.82); 71; 2-6; N=62
  Economics Degrees:                   5.00 (0.78); 79.2; 3-6; N=24
  No Economics Degrees:                4.70 (0.85); 64.9; 2-6; N=37
  10 Years of Experience or Less:      4.69 (0.74); 61.5; 3-6; N=26
  More than 10 Years of Experience:    4.92 (0.87); 77.8; 2-6; N=36

Degree tutor useful
  All Participants:                    4.36 (1.22); 60.7; 1-6; N=28
  Economics Degrees:                   4.67 (0.87); 66.7; 3-6; N=9
  No Economics Degrees:                4.21 (1.36); 57.9; 1-6; N=19
  10 Years of Experience or Less:      4.31 (1.32); 61.5; 1-6; N=13
  More than 10 Years of Experience:    4.40 (1.18); 60; 2-6; N=15

Degree evaluation feedback sessions useful
  All Participants:                    4.81 (0.78); 66.7; 2-6; N=63
  Economics Degrees:                   4.92 (0.72); 70.8; 4-6; N=24
  No Economics Degrees:                4.74 (0.83); 63.2; 2-6; N=38
  10 Years of Experience or Less:      4.59 (0.64); 51.9; 4-6; N=27
  More than 10 Years of Experience:    4.97 (0.84); 77.8; 2-6; N=36

Degree evaluation feedback sessions caused change
  All Participants:                    4.63 (1.02); 61; 1-6; N=59
  Economics Degrees:                   4.73 (1.16); 68.2; 1-6; N=22
  No Economics Degrees:                4.58 (0.94); 58.3; 2-6; N=36
  10 Years of Experience or Less:      4.67 (0.70); 62.5; 3-6; N=24
  More than 10 Years of Experience:    4.60 (1.19); 60; 1-6; N=35

Degree course fulfilled expectations
  All Participants:                    5.22 (0.68); 89.1; 3-6; N=64
  Economics Degrees:                   5.24 (0.72); 92; 3-6; N=25
  No Economics Degrees:                5.21 (0.66); 86.8; 4-6; N=38
  10 Years of Experience or Less:      5.07 (0.73); 85.2; 3-6; N=27
  More than 10 Years of Experience:    5.32 (0.63); 91.9; 4-6; N=37

Degree participant would recommend course to others
  All Participants:                    5.55 (0.64); 95.3; 3-6; N=64
  Economics Degrees:                   5.68 (0.56); 96; 4-6; N=25
  No Economics Degrees:                5.45 (0.69); 94.7; 3-6; N=38
  10 Years of Experience or Less:      5.37 (0.79); 88.9; 3-6; N=27
  More than 10 Years of Experience:    5.68 (0.47); 100; 5-6; N=37

Degree satisfied with course organizers
  All Participants:                    5.56 (0.66); 93.8; 3-6; N=64
  Economics Degrees:                   5.60 (0.65); 92; 4-6; N=25
  No Economics Degrees:                5.53 (0.69); 94.7; 3-6; N=38
  10 Years of Experience or Less:      5.33 (0.78); 88.9; 3-6; N=27
  More than 10 Years of Experience:    5.73 (0.51); 97.3; 4-6; N=37

Degree satisfied with logistical/practical arrangements
  All Participants:                    5.55 (0.73); 93.8; 2-6; N=64
  Economics Degrees:                   5.68 (0.56); 96; 4-6; N=25
  No Economics Degrees:                5.45 (0.83); 92.1; 2-6; N=38
  10 Years of Experience or Less:      5.22 (0.93); 85.2; 2-6; N=27
  More than 10 Years of Experience:    5.78 (0.42); 100; 5-6; N=37

Degree satisfied with social events
  All Participants:                    5.09 (0.83); 76.6; 3-6; N=64
  Economics Degrees:                   5.24 (0.72); 84; 4-6; N=25
  No Economics Degrees:                4.97 (0.88); 71.1; 3-6; N=38
  10 Years of Experience or Less:      4.93 (0.92); 63; 3-6; N=27
  More than 10 Years of Experience:    5.22 (0.75); 86.5; 3-6; N=37

Degree satisfied with hotel accommodations
  All Participants:                    5.05 (0.88); 76.3; 2-6; N=59
  Economics Degrees:                   5.14 (0.83); 81.8; 3-6; N=22
  No Economics Degrees:                4.97 (0.91); 72.2; 2-6; N=36
  10 Years of Experience or Less:      4.96 (0.87); 69.2; 3-6; N=26
  More than 10 Years of Experience:    5.12 (0.89); 81.8; 2-6; N=33

TABLE 4: OVERALL FLAGSHIP COURSE EVALUATION QUESTION RESULTS
(Entries for each group: % Yes / % No; N)

Would you be interested in returning?
  All Participants:                    96.7 / 3.3; N=60
  Economics Degrees:                   95.8 / 4.2; N=24
  No Economics Degrees:                97.1 / 2.9; N=35
  10 Years of Experience or Less:      100 / 0; N=26
  More than 10 Years of Experience:    94.1 / 5.9; N=34

How did you find:
(Entries for each group: % Too Little / % About Right / % Too Much; N)

amount of attention devoted to "what to do"
  All Participants:                    4.7 / 82.8 / 12.5; N=64
  Economics Degrees:                   4 / 80 / 16; N=25
  No Economics Degrees:                5.3 / 84.2 / 10.5; N=38
  10 Years of Experience or Less:      0 / 88.9 / 11.1; N=27
  More than 10 Years of Experience:    8.1 / 78.4 / 13.5

amount of attention devoted to "how to do it"
  All Participants:                    14.3 / 77.8 / 7.9; N=63
  Economics Degrees:                   12 / 80 / 8; N=25
  No Economics Degrees:                16.2 / 75.7 / 8.1; N=37
  10 Years of Experience or Less:      18.5 / 77.8 / 3.7; N=27
  More than 10 Years of Experience:    11.1 / 77.8 / 11.1

amount of learning materials
  All Participants:                    1.6 / 62.5 / 35.9; N=64
  Economics Degrees:                   4 / 56 / 40; N=25
  No Economics Degrees:                0 / 65.8 / 34.2; N=38
  10 Years of Experience or Less:      3.7 / 66.7 / 29.6; N=27
  More than 10 Years of Experience:    0 / 59.5 / 40.5

amount of attention devoted to evidence-based learning
  All Participants:                    12.9 / 77.4 / 9.7; N=62
  Economics Degrees:                   12 / 80 / 8; N=25
  No Economics Degrees:                13.9 / 75 / 11.1; N=36
  10 Years of Experience or Less:      18.5 / 70.4 / 11.1; N=27
  More than 10 Years of Experience:    8.6 / 82.8 / 8.6

amount of attention devoted to country case studies
  All Participants:                    12.5 / 68.9 / 18.8; N=64
  Economics Degrees:                   12 / 64 / 24; N=25
  No Economics Degrees:                13.2 / 71.1 / 15.7; N=38
  10 Years of Experience or Less:      18.5 / 59.3 / 22.2; N=27
  More than 10 Years of Experience:    8.1 / 75.7 / 16.2

the length of the Flagship Course
  All Participants:                    6.3 / 63.5 / 30.2; N=63
  Economics Degrees:                   8 / 64 / 28; N=25
  No Economics Degrees:                5.4 / 62.2 / 32.4; N=37
  10 Years of Experience or Less:      0 / 63 / 37; N=27
  More than 10 Years of Experience:    11.1 / 63.9 / 25

Using the benchmark of 66% or less falling within the 5 - 6 range, the same two items emerge as areas that may require attention or reconsideration. Usefulness of the tutor was slightly below 61% (60.7%). This was consistent across groups, with the exception of those who hold degrees/certificates in economics, where the percent in the 5 - 6 range rose to almost 67% (66.7%). It should be noted that this assessment came from only 28 participants who reported using the tutor. Combined with the number who chose not to use the tutor, these findings suggest that such a service, in its present form, may not be attractive to participants. The other item was the degree to which participants felt that the feedback sessions caused any changes in modules. Here the percent in the 5 - 6 range was 61%. This was fairly consistent across groups, with those holding degrees/certificates in economics the only group above the 66% benchmark, at 68.2%. It also should be noted that for the companion question of whether the feedback sessions were useful, those without degrees/certificates in economics and those with 10 years or less of experience in the healthcare field fell below the 66% mark. Since the feedback sessions represented one of the primary methods of the formative evaluation, it may be necessary to make the results of these sessions clearer to participants.

Responses to the process questions using the 3-point scale to assess the value of course content also showed a generally positive result. According to the results shown in TABLE 4, participants found that these different aspects of the course were adequate, as opposed to being too little or too much. These results were consistent across subgroups, with no clear pattern of differences between the groups evident.

There were some interesting findings about the length of the course and its materials, consistent with information obtained from the formative evaluation. One of the most common complaints heard throughout the in-course feedback sessions was that the reading assignments were excessive. For the question that asked how participants felt about the amount of materials, 36% of the group indicated that it was "too much." And there was a clear difference between the subgroups on this question. Participants without economics degrees and those with less experience in the healthcare field were even more inclined to believe that these materials were excessive.
This may be explained by the fact that much of the course material dealt with economic issues. A related question asked about the length of the course. Thirty percent (30%) of respondents felt that it was too long. Results from these two questions suggest that there was concern among participants that the amount of work and time required by the course may have been too much.

Course outcomes consisted of the degree to which participants felt that the course was relevant to their work, was a worthwhile use of their time, fulfilled their expectations, and whether they would recommend the course to others. Additionally, a "yes/no" question was asked on whether participants would be interested in returning, reported in TABLE 4. For all of these outcome questions, the ratings were generally in the high range, suggesting a high level of satisfaction with the course. The mean scores on the 6-point scale were well above 5.0 on all four items. This strong rating was consistent across the four subgroups examined. There was, however, a discernible pattern among these groups. Participants with economics degrees and those with the most experience appear to rate the outcomes slightly higher than those in the other groups.

The "yes/no" question of whether participants would be interested in returning had similar results. "Yes" responses were well above 90% for the general group and each subgroup. Comparatively, however, the results from the subgroups showed the opposite pattern from the responses on the 6-point scaled outcome questions. On this question, participants without degrees in economics and those with less experience in the healthcare field tended to respond more in the affirmative. It is quite possible, in interpreting these results, to conclude that economists and those most experienced valued the technical aspects of the course more highly, but may have exhausted how much more they could gain from further exposure to the course.

Module 1 Course Achievement

Module 1 is designed to introduce the concepts and tools of health economics. It is intended to assure that participants have at least a minimal understanding of the necessary concepts and terms. The module differed from the other modules both in its teaching method and in its evaluation measures. It used a "distance learning" method of instruction, where participants were provided materials well in advance of the course and asked to prepare by reading and studying these materials. Days 2 and 3 of the Flagship Course consisted of classroom instruction and exercises designed to review and supplement the basic materials provided to participants prior to their attending the course.

Participants were asked two sets of "achievement" questions designed to measure mastery of information associated with Module 1's subject matter. One set of questions was asked on the first day of Module 1 and the second set was asked on the last day of the course. The objective was to compare the number of correct answers before and after the course. Some additional information was also collected on the amount of time spent and module material covered by each participant prior to attending the course. Questions were asked in a group setting, and were recorded and analyzed at the group level. An electronic touch pad was used to record the answers anonymously. TABLE 5 gives the results of these questions and answers.

TABLE 5 shows the percent of correct answers on the first and second tests for the 5 repeat questions and for the overall groups of questions. For the 5 repeat questions, it is apparent that there was a sizable increase in the percent of correct answers from the first to the second test. This was expected, since the same questions were used, the correct answer for each question was given to the group immediately following the first test, and the terms and concepts were used and explained during the course's subsequent modules. What is most significant about these results is that, even under the most favorable conditions, from one-quarter to just over one-half of the participants still could not identify the correct answer. In one of the repeat questions, Question #2, almost 55% of participants still got the answer wrong, even after having been given that question with the correct answer approximately 4 weeks earlier.
For the 5 repeat questions, it is apparent that there was a sizable increase in the percent of correct answers from the first to the second test. This was expected, since the same questions were used, the correct answer for each question was given to the group immediately following the first test, and the terms and concepts were used and explained during the course's subsequent modules. What is most significant about these results is that, even under the most favorable conditions, from one-quarter to just over one-half of the participants still could not identify the correct answer. In one of the repeat questions, Question #2, almost 55% of participants still got the answer wrong, even after having been given that question with the correct answer approximately 4 weeks earlier. The possibility that participants may not be retaining the information in the manner expected is supported further by the results from the two sets of non-repeated questions. On the first set of questions, 58.2% of the answers were correct. In the second test, for the non-repeated questions, the percent correct was 63.2%, an increase of only 5 percentage points.

TABLE 5: DIFFERENCE IN REPEAT QUESTIONS AND OVERALL FOR 1ST AND 2ND TESTS - MODULE 1

                          1st Test      2nd Test
                          % Correct     % Correct
Repeat Question #1          45.3          69.4
Repeat Question #2          21.9          45.2
Repeat Question #3          32.8          61.3
Repeat Question #4          56.3          75.8
Repeat Question #5          53.2          74.2
Overall                     58.2          63.2

Previously raised issues as to the validity of these questions and their administration make it difficult to argue that this is a "true" indicator of what was learned in this module and in the course. But the fact that the pre/post test has at least some face validity, together with results that are heavily skewed in one direction, should alert us to the strong possibility that the actual learning objectives may have fallen short. When these results are contrasted with the self-reported learning achievement gains, the difference is remarkable. Two possible explanations for this difference are that participants learned much less than they believed they learned (as measured by the questions they were asked), or that the information learned was different from what was measured and from what the course planners felt was important.
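The figures in TABLE 5 lend themselves to a very simple calculation: the percentage-point change in correct answers between the two administrations, and the share of participants still answering incorrectly at the end of the course. The short Python sketch below, offered only as an illustration and not as part of the evaluation's own tooling, reproduces that arithmetic from the TABLE 5 values.

    # Illustrative sketch only: reproduces the simple pre/post comparison reported
    # in TABLE 5 for Module 1. All percent-correct figures are taken from TABLE 5.

    repeat_questions = {
        "Repeat Question #1": (45.3, 69.4),
        "Repeat Question #2": (21.9, 45.2),
        "Repeat Question #3": (32.8, 61.3),
        "Repeat Question #4": (56.3, 75.8),
        "Repeat Question #5": (53.2, 74.2),
    }

    for label, (first_test, second_test) in repeat_questions.items():
        gain = second_test - first_test        # percentage-point improvement
        still_wrong = 100.0 - second_test      # share still answering incorrectly
        print(f"{label}: +{gain:.1f} points; {still_wrong:.1f}% still incorrect")

    # Non-repeated question sets: 58.2% correct on the first test, 63.2% on the second.
    print(f"Overall gain: {63.2 - 58.2:.1f} percentage points")

Run as written, the sketch confirms the pattern discussed above: Repeat Question #2, for example, improves by 23.3 percentage points yet still leaves 54.8% of participants with the wrong answer.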
Evaluation Questions

Module 1 used a set of questions in an End of Module (EOM) questionnaire administered on the last day of the module as another means of evaluation. A number of the questions were common to all of the course modules, whereas others were specific to Module 1 and its use of the distance learning method. Most questions used the standard 6-point scale, although three of the module-specific questions used a 3-point scale that ranged from "too short" to "too long." All of the questions for this module dealt with process issues. Background information on participants was not collected, so group breakouts were not available for this module. Results from these questions are shown in TABLE 6.

TABLE 6: MODULE 1 EVALUATION QUESTIONS

Rating of the following course items:            Mean    S.D.    Min.     Max.    % 5      N
                                                                 Value    Value   or 6
Degree material was clear                        4.62    0.83      3        6     61.9     63
Degree material was useful                       4.87    0.94      2        6     66.7     63
Degree concepts well explained                   4.56    0.88      3        6     55.6     63
Degree examples/illustrations useful             4.33    0.82      2        6     44.4     63
Degree end-of-chapter questions useful           4.56    1.03      2        6     50.8     63
Degree dist. learn. material comprehensive       4.69    1.16      1        6     65.6     61
Degree trainers clear                            4.31    0.93      2        6     40.3     62
Degree trainers adequately answered questions    4.49    0.78      3        6     47.6     63
Degree 2-day review worthwhile use of time       4.54    0.84      2        6     52.4     63

                                                  Too short      About right     Too long     TOTAL
                                                   N      %        N      %       N     %       N
How did you find length of the module?             1     1.6      42    68.9     18   29.5     61
How did you find time to read prior to arrival?   27    44.3      31    50.8      3    4.9     61
How did you find time for review (2 days)?        19    32.2      40    67.8      0    0.0     59

According to these results, most participants viewed these different aspects of the module favorably, but there was some variation among the items. For items measured with the standard World Bank 6-point scale, two related items stand out as having received relatively low ratings: the degree to which examples or illustrations were useful and the degree to which the trainers were clear. The mean scores for these two items were the lowest, each had a minimum value of 2, and their respective percentages falling in the upper 5-6 range were 44.4% and 40.3%. Together these indicators suggest that participants had difficulty with two key presentation media, the trainers and the illustrative aids. The actual material covered, however, was better received. When asked to judge the clarity, usefulness and comprehensiveness of the materials used in this module, participants ranked these highest among the items.

When using the benchmark of 66% of ratings falling in the 5-6 range, this module did not do well. Only one item was above this standard, the degree to which material was useful, which scored 66.7%. All other items were below this level, with several falling into the 40% range. It appears from these results that all aspects of this module could be reviewed and consideration given to how their ratings among participants might be improved.

Three items used a 3-point scale to assess some module-specific issues. This scale ranged from "too short" to "too long," and measured participants' views on the module's length, the amount of pre-arrival preparation time and the amount of post-arrival review time. There was some evidence from the overall course evaluation that participants were concerned with the amount of work assigned, the time allowed to complete this work and the overall length of the course. These results are consistent with those findings. Although a solid majority, 68.9%, felt that the length of the module was appropriate, nearly 30% (29.5%) believed it lasted too long. Results from the questions on the amount of time made available to participants were skewed in the opposite direction. Only a bare majority, 50.8%, thought the pre-arrival time sufficient to read the assigned material, while an almost equal number, 44.3%, said it was too short. Post-arrival review time fared better: almost 68% (67.8%) said that the amount of time was adequate, although more than 30% (32.2%) believed it was too short. These results may be significant for how the distance learning method can be adjusted to be more effective in EDI training.
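The benchmark used throughout this report flags an item when fewer than 66% of its ratings fall in the 5-6 range of the 6-point scale. As a purely illustrative sketch (the ratings below are invented, not drawn from the course questionnaires), the following Python fragment shows how the per-item summaries reported in TABLE 6 and the annex tables -- mean, standard deviation and percent rating 5 or 6 -- are derived from raw ratings and checked against that benchmark.

    # Illustrative sketch of the per-item summaries used in this report: mean,
    # standard deviation, and the share of ratings in the 5-6 range, checked
    # against the 66% benchmark. The ratings below are invented example data.

    from statistics import mean, stdev

    BENCHMARK = 0.66  # an item "passes" when at least 66% of ratings are a 5 or a 6

    def summarize(ratings):
        share_top = sum(1 for r in ratings if r >= 5) / len(ratings)
        return mean(ratings), stdev(ratings), share_top

    items = {
        "Degree material was useful": [6, 5, 5, 4, 6, 5, 5, 6, 4, 5],
        "Degree trainers clear":      [4, 3, 5, 4, 4, 6, 3, 4, 5, 4],
    }

    for name, ratings in items.items():
        m, sd, share = summarize(ratings)
        verdict = "meets benchmark" if share >= BENCHMARK else "below benchmark"
        print(f"{name}: mean {m:.2f}, s.d. {sd:.2f}, {share:.1%} in 5-6 range ({verdict})")

Applied to the actual TABLE 6 figures, only "Degree material was useful" (66.7% in the 5-6 range) clears this threshold for Module 1.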
Module 2

Module 2's evaluation used two sets of data obtained from participant questionnaires administered at the end of the module (EOM). The first group of questions was the set of core questions used across most other modules, and dealt with a number of the basic processes and outcomes relevant across modules. TABLE 7A (see Annex) shows the results of these questions. For the most part, these results show a positive assessment of the module by participants. Mean scores for all participants are in the upper 4's and 5's, well within the positive range of the scale.

There are some differences, however, in ratings for different aspects of the module and among the various subgroups. For the process indicators, participants overall tended to give higher ratings to the performance of the presenters than to the supporting materials and teaching methods. They also gave high ratings to the two outcome indicators, the relevance of the module to their work and whether the module was a worthwhile use of their time. The lowest rating was for the software piece used in this module, "Policy Maker." There were some differences between those who had degrees/certificates in economics and those who did not, although it is difficult to discern a pattern in these differences. For those with different levels of experience in the healthcare field, there was also an observable difference and a pattern to that difference. Those with less experience tended to rate the performance of the presenters higher and the supporting materials lower than those with more experience. Like those with and without degrees/certificates in economics, the experience groups split on the outcome indicators.

According to the 66% benchmark criterion, this module was well received by participants. All but three of the course items fell within the acceptable range. Two of those items -- the usefulness of country studies and the degree topics were treated in depth -- were above the 60% level. And, for both items, those participants with degrees/certificates in economics and those with the most experience were above the 66% level. The third item, the usefulness of the Policy Maker software package, was just over 49%, making it the least favorable part of the module. Interestingly, those participants with training in economics rated this item lowest. The disparity in scores between Policy Maker and the other Module 2 items suggests a need to review the utility of this software package for participants.

Results from the pre/post knowledge self-assessment for Module 2 are shown in TABLE 7B (see Annex). The assessment dealt with concepts specific to this module. In interpreting the degree of knowledge gain reported on these items, it is important to consider the level of the pre-assessment mean score, since this level affects the degree of change between pre and post scores. Generally, the pre-assessment scores were higher for participants reporting degrees/certificates in economics, reflecting the economic content of the course. One question, on assessing political options, showed a higher pre-course mean score for those participants without a degree/certificate in economics, further supporting the point that these scores reflect familiarity with economic concepts. There appeared to be a mixed pattern in pre-course scores for the two groups with different levels of experience in healthcare.

The degree, or percentage, change between pre- and post-course mean scores for those with and without degrees in economics was as expected. Persons with such degrees/certificates had a smaller gain for most items, suggesting that the greatest gain in knowledge came to those less familiar with economic concepts. The results, however, were not as expected when comparing groups with different levels of experience. For most items, persons with the greatest experience registered the greatest gain. This was opposite the expected result that those with less experience would register the greatest gain in knowledge.
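The "percentage gain" figures cited for the pre/post self-assessments in this and the following module sections are simply the change in mean score expressed relative to the pre-course mean. The brief Python sketch below is included only to make that arithmetic explicit; it uses the Module 3 medical savings accounts item discussed in the next section, which rose from a pre-course mean of 2.00 to a post-course mean of 4.25.

    # Illustrative sketch of the pre/post gain calculation used for the
    # self-assessment items: total gain is the difference of the means, and the
    # percentage gain expresses that difference relative to the pre-course mean.

    def percentage_gain(pre_mean: float, post_mean: float) -> float:
        return (post_mean - pre_mean) / pre_mean * 100.0

    # Module 3, "comprehend key issues for medical savings accounts" (TABLE 8B)
    pre, post = 2.00, 4.25
    print(f"Total gain: {post - pre:.2f} points")                 # 2.25
    print(f"Percentage gain: {percentage_gain(pre, post):.1f}%")  # 112.5%

Because the pre-course mean sits in the denominator, items that start low show very large percentage gains even for moderate absolute changes, which is why the level of the pre-assessment score has to be kept in mind when comparing gains across items and subgroups.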
Module 3

The Module 3 evaluation consisted of the standard set of evaluation questions, using the 6-point scale, and a set of module-specific pre/post assessment questions, obtained with an EOC questionnaire. Results from the two sets of questions are shown in TABLES 8A and 8B (see Annex).

TABLE 8A summarizes responses to the set of standard evaluation questions. These results show that participants gave Module 3 generally high ratings on these indicators. Nearly all of the mean scores were well within the 5.0 range, and the percentage in the upper range (5 or 6) was usually 80% or above. The quality rated lowest was the degree to which topics were treated in depth. On this indicator, the mean score fell into the 4.0 range, with 66.7% of responses falling into the upper range. These generally high ratings were consistent across subgroups. When comparisons were made between the groups, some differences emerge. Those participants with degrees/certificates in economics tended to rate the standard items higher than their counterparts, although there are a number of exceptions to this pattern. Similar differences can be seen between those with varying levels of experience. Participants with less experience tend to rate the items higher than those with over 10 years of experience in the healthcare field, again with some exceptions to the general pattern.

Results from applying the 66% criterion show that the items in this module were viewed positively by participants. All items were above the base level, with the exception of the adequacy of the tutor, and the small number of responses to this item makes that result difficult to interpret. When this indicator is viewed across subgroups, one item stands out for differences between groups. On the degree to which topics were treated in depth, there are some large differences between both the economics groups and those with different levels of experience. For those with degrees/certificates, the percent in the 5-6 range falls to 50%, while for those without these credentials it rises to over 78% (78.6%). There is a similar pattern for those with different levels of experience. Respondents reporting 10 years or less experience in the healthcare field see this indicator fall to under 55% (54.5%), while those with greater experience rise to over 83% (83.3%). These results suggest that substantive aspects of how the module's material was treated were perceived very differently by different participant groups. It may be more effective to address these groups differently in future presentations of this module.

For the pre/post question results, shown in TABLE 8B, the overall group entered with relatively low pre-course mean scores. All scores fell into the 2.0 and 3.0 range, which allowed for some large gains in the post-course scores. Almost all items gained over 50% in their mean scores between the pre- and post-course measures. The most remarkable change was in how well participants comprehend key issues related to medical savings accounts. This item started at a low pre-course mean score of 2.0, but rose by over 112% to its post-course mean score. These patterns persisted across subgroups. Participants with training in economics had appreciably higher pre-course mean scores for almost all items and posted a lower gain in their post-course mean scores than those without this training.
Similarly, participants with the most experience had higher pre-course scores and gained less in their post-course mean scores than those with fewer years of experience in the healthcare field. The remarkably high increase from pre- to post-course mean scores for the medical savings accounts item also persisted across subgroups. All four subgroups began with a relatively low pre-course mean score and increased by over 100% in their post-course mean scores. The same pattern prevailed, with those without training in economics and those with the least experience showing the greatest gains. Possible explanations for these large increases are the difficulty and/or the newness of the material covered in the module. The results suggest that participants knew little about the topic and felt that they gained a great deal from the training.

Module 4

TABLE 9A (see Annex) presents results for Module 4 from the standard set of evaluation questions. Overall, participants tended to rate these standard items highly. Mean scores were in the upper 4.0 range and the percent scoring in the highest range, 5-6, was generally above 50%. The exception to this pattern was how well participants felt trainers answered their questions. This item had the lowest mean score, 4.57, with just 42.5% of responses falling in the upper range. There was a strong, favorable response to the question of the module's relevance to participants' work. This item's mean score was the highest at 5.10, with over 85% ranking it in the upper range. There appeared to be differences between the major subgroups examined. Those respondents with degrees/certificates in economics tended to rate the items higher than their counterparts without economics training. Similarly, the more experienced group rated these standard items higher than those with less experience in the healthcare field. One of the largest differences observed between the experience subgroups was on the question of the module's relevance to participants' work. Those with greater experience seemed to value the module for its relevance more than those with less experience.

Results from applying the 66% criterion show a number of items that fall below this range. Of the five items which fall outside the acceptable range, one deals with course content while the remaining four are features related to the presenters. This clearly suggests that some attention should be given to reviewing the methods and practices employed by presenters in their treatment of module topics. Included in this list of trainer-related topics are the degree trainers appeared knowledgeable of the issues, how adequately they answered questions, their clarity and the degree to which they treated topics in depth. For almost all of the topics falling below the 66% level, there was a similar pattern of differences between groups. Generally, participants with degrees/certificates in economics were more positive than the other subgroups. In four of the five areas, this group rated the topic higher than the 66% level. This may suggest that those participants with a background in economics were more familiar with the technical aspects of the module's content, and therefore felt that the presenters did an adequate job in presenting this information.

As can be seen in TABLE 9B (see Annex), pre-course ratings for specific items were generally low, and the gains between pre- and post-course ratings high. The least gain was registered in the area of analyzing alternative definitions of health equity.
The item that appeared to be the most valued for its additional information was understanding approaches to targeting public health subsidies. This item had one of the lowest pre-course mean scores (3.05) and also registered the greatest gain from pre- to post-course mean scores, a 67.2% increase. There were observable differences between the subgroups considered in this analysis. The most consistent differences were between those with and without training in economics. Those with degrees/certificates in economics had higher pre-course mean scores and smaller gains between their pre- and post-course scores. This may reflect the more technical nature of this module and its greater use of economic terms and concepts. The item that showed the greatest gain for the group without economics credentials was the mechanics of incidence analysis, where the gain in mean scores between the pre- and post-course measures was 100%. There were similar differences between those with greater and less experience, although the pattern was more mixed. Those with greater experience in the healthcare field tended to rate their pre-course knowledge of the items higher and showed a smaller gain after taking the module. The greatest pre/post gain for these groups was on the item that dealt with implementation issues relating to public health subsidies, where those with less experience registered a 100% gain.

Module 5

Results from the Module 5 survey are presented in TABLES 10A and 10B (see Annex). In TABLE 10A, results on the standard items show ratings consistent with previous modules. Mean scores for the items were mostly in the upper 4.0 range, with the percent in the upper 5-6 range going as high as 81.5%. The least valued features of this module were the trainers' ability to answer questions adequately and the degree to which topics were treated in depth. These same two items had been identified in other modules as having the lowest scores. The highest ratings were given to the usefulness of materials and the module's relevance to work. This is an interesting difference, suggesting that the presenters were less well regarded than the information they were covering.

Some differences between subgroups were noted. Those respondents with degrees/certificates in economics tended to rate this module higher than their counterparts, although these results were mixed. The biggest difference between the two economics groups was on the rating of whether the module was a worthwhile use of their time. Those with training in economics gave this feature a much higher rating, perhaps reflecting the economic orientation of the module's topic. A similar pattern was observed between the two experience-based groups. Those with the most experience in the healthcare field generally rated the items higher than those with less experience, although this was not unanimous across all items. The item showing the greatest difference for these groups was the degree to which trainers answered questions adequately. Those with greater experience gave trainer performance in this area a higher rating than the group with less experience. This may suggest that those with less experience required a more extensive explanation of the topics covered in the module.

The 66% criterion shows a number of items below that level. A total of five items fell in this range, with three of them relating to presenter performance.
The item which deals most closely with overall module satisfaction, the degree to which the module was a worthwhile use of participants' time, also fell below the 66% level for the total group. Some interesting differences were noted between the two economics degree/certificate groups. Two items showed much higher ratings by those with backgrounds in economics. On the item which dealt with whether the trainers' presentation was clear, those with degrees/certificates scored 72.7% in the 5-6 range, whereas the other group had only 50% of participants in this range. Similarly, on the item of whether the module was a worthwhile use of participants' time, those with degrees/certificates scored almost 82% (81.8%) in the top range, while those without these credentials dropped below 44% (43.8%). These differences reinforce the need to consider the different requirements of participants with various technical and experiential backgrounds when planning future presentations of this module.

Results from the pre/post assessment of specific Module 5 items are shown in TABLE 10B. Participants tended to rate themselves low on the pre-course knowledge scale, with most of the mean scores falling into the 3.0 range. The area that ranked lowest in pre-course knowledge was being able to identify different approaches to use in dealing with various difficulties in providing health services. Gains reported by the overall group from pre- to post-course knowledge were substantial. The percentage gain fell mostly into the 50%-60% range for most items, with some exceptions. These exceptions appear to be the less technical items, such as being able to define a basic health services package. There appear to be differences between the subgroups on the pre/post course gains for Module 5 items. For those with degrees/certificates in economics, pre-course ratings were higher across the items and the gains posted between pre- and post-course measures lower than for their counterparts. This, again, may reflect the economic nature of the module's material. The pattern for those with different levels of experience in the healthcare field was more mixed. Both the pre-course scores and the post-course gains varied across the items, with no clear pattern of difference between those with more or less experience in the field.

Module 6

Module 6 differed from the other modules. Because of pre-course demand, it was the only module repeated during the course. The two sessions consisted of the same materials and almost all of the same presenters. Session 2 had the advantage of having received some feedback on the module's perceived strengths and weaknesses from the first session. Results from the EOC questionnaires are presented for both sessions in order to allow for some comparison between them.

Standard Items

Standard evaluation items for the two sessions of Module 6 are reported in TABLES 11A and 12A (see Annex). The first session's questionnaire results, seen in TABLE 11A, show a solid set of ratings from the overall group. Most mean scores cluster in the upper 4.0 range and the percentages in the upper 5-6 range are all above 50%, with one exception. That exception was the degree to which the case method was useful, which appears to be the least valued feature of the session. When comparing the two primary subgroup sets, there appears to be a clear pattern to their rankings for this module session.
Those with degrees/certificates in economics gave a noticeably lower ranking to these items than their counterparts. This difference may reflect the course content, which seemed less technical and not as focused on economic terms. A similar difference was noted between those with different levels of experience in the healthcare field. Here, however, it appears that those with the most experience gave this session higher ratings. This finding is more difficult to interpret. It may be that the subject matter was perceived as more relevant for this group and its importance was not as apparent to those with less experience.

Viewing the items which fell below the 66% benchmark level, two features emerge as significant in the results. First, it is striking that all but three of the core items fall below the benchmark, and one of those three is the degree to which the tutor was adequate, where there was only one respondent. The only features which were judged successful according to this criterion were those relating to the background papers, specifically their clarity and usefulness. All of the items which dealt with the performance of trainers fell below this minimum level. Second, there was an exceptionally large disparity between the subgroups, particularly those with and without degrees/certificates in economics. On one topic, the degree to which topics were treated in sufficient depth, those with degrees/certificates scored only 25% in the 5-6 range, while those without these credentials scored almost 73% (72.7%) in that range. The same distribution existed for those with different levels of experience, where those with 10 years or less scored only 25% in the 5-6 range. The overall pattern of this disparity was more prominent between those with different training in economics than between those with different levels of experience.

Results for these same items from the second session of Module 6 are reported in TABLE 12A. The pattern of these results is remarkably similar to that from the first session, with one apparent difference: in the second session, the mean scores are higher across the items. Further, the pattern of differences between subgroups is not as pronounced and the magnitude of these differences not as great as was found in the first session. One explanation for this development may be that the presenters and planners of Module 6 were attuned to some of the voiced difficulties with the first session and took steps to correct these for the second session.

When viewing results from the second session of this module from the vantage point of the 66% criterion for responses in the 5-6 range, clearly there were some significant changes from the first session. Overall, there was a sizable change in the number of items falling above the minimum level. Only two items, the usefulness of country studies and the degree topics were treated in depth, fell below 66%, and both of these were near that level at 61.5%. This result contrasts favorably with the first session, when only 2 of the 10 items examined were above the 66% level. The same pattern of differences between those with and without degrees/certificates in economics persisted into the second session. However, the gap between these two groups narrowed somewhat, indicating a more positive response from those with some training in economics. This change lends support to the ability of trainers to utilize results from an evaluation to make improvements in their programs.
Module-specific Items

Changes in pre- and post-course mean scores for specific Module 6 items, designed to be a self-assessment measure of achievement, are presented in TABLES 11B and 12B (see Annex). In TABLE 11B, results from the first session show a set of relatively high pre-course scores. This may indicate that respondents felt they knew much more about these topics than about those covered in other modules, perhaps because they appear to be less technical and not as focused on economic terms. These higher scores account for the relatively lower gains, as seen in the percentage change between pre- and post-course scores. Examining the differences between the subgroups, it appears that in this module those without degrees/certificates in economics tend to have higher pre-course scores than those with these credentials. This, again, may attest to participants' perception of the module's content. When viewing the results from the groups with different levels of experience, there appears to be no pattern in the direction of their observed differences.

TABLE 12B gives the results of the pre/post course assessment for the second session. For the overall group, the pre-course scores appear slightly lower and the level of gain higher than observed in the first session. This may be the result of trainers making adjustments to their course content and presentation. Since the measures of pre-course knowledge were taken after the course, it may be that presenters refocused their material, emphasizing its more technical components. This interpretation receives some support from the pattern between groups. Whereas in the previous session those without training in economics rated their pre-course knowledge higher, in this session the difference between the groups was greatly diminished. The pattern of difference continued to be mixed between the groups with varying levels of experience.

The ability to compare the same module given in two different sessions adds an additional dimension to this evaluation. It suggests that the feedback obtained earlier can be used effectively to address any perceived weaknesses in the module's content or presentation.

Module 7

Results from the Module 7 evaluation questions are provided in TABLES 13A and 13B (see Annex). TABLE 13A summarizes responses to the standard set of questions asked of all modules. According to these results, participants gave relatively high ratings to the module. Mean scores tended to fall within the upper 4.0 and 5.0 range, with the percentage in the 5-6 range going from slightly over 60% (63.3%) to over 85% (86.4%). Noticeably, the least valued component of the module was the usefulness of the country studies. Both the mean score and the percent in the 5-6 range were the lowest of the group. Trainers' knowledge and responsiveness to questions were given the highest ratings. When examining the two primary subgroups, some interesting differences emerge. Those participants reporting degrees/certificates in economics gave significantly higher ratings to the module than those without these credentials. Again, this may reflect the nature of the module's subject matter, which was heavily oriented towards economic concepts and methods. The highest ratings for both of the economics-based groups were for trainers' knowledge and ability to answer questions. This suggests that the performance of trainers in this module was highly valued, regardless of the participants' background or training in economics.
There is a similar difference when viewing the group by level of experience in the healthcare field. The observed differences, however, were not in the anticipated direction. It appears that those participants with less experience gave the module's features higher ratings than those with greater experience. These differences were not as great as those between the two economics groups and there were some exceptions, but a clear difference can still be seen. Our expectation was that those with greater experience would find the module more relevant and rate it higher. It may be that those with less experience have not been exposed to this information as much as the more experienced group and therefore found it more relevant to their work in the healthcare field.

Results from the percentage of respondents whose ratings were in the 5-6 range suggest that this module was well received. All but one of the items rated, the degree to which participants found the country studies useful, were well above the 66% benchmark. Even this exception was close to that level, with almost 64% (63.6%) scoring in this range. This pattern is consistent across subgroups. When the country studies item is broken down by group, only two of the four subgroups -- those without degrees/certificates in economics and those with more than 10 years of experience -- score under the 66% level in the 5-6 range. This consistency suggests a very strong showing by this module.

Results from the self-assessment of gain in knowledge as a result of Module 7 are shown in TABLE 13B. These results show pre/post course gains comparable to other modules. Participants rated their level of pre-course knowledge at a low level, with all items falling within the 3.0 range. Gains in knowledge as measured by post-course mean scores were impressive. The percentage gain ranged from 32.9% to a high of 62.1%. The greatest change came in understanding the rationale of weighted capitation formulae, even though this item was not rated lowest in the pre-course level of knowledge. Differences seen between the various subgroups were consistent with their ratings on the module's standard items. Those participants with degrees/certificates in economics and those with the least experience tended to have higher pre-course mean scores and also smaller gains in post-course knowledge levels.

Module 10

Module 10, like Modules 1 and 2, included all Flagship Course participants. Results from the evaluation questions are provided in TABLES 14A and 14B (see Annex). In the standard set of questions, results of which are shown in TABLE 14A, this module added additional questions. These were an extension of the basic question, "Was the module a worthwhile use of your time?" In addition to that general question, the same question was asked about three specific aspects of the module. The additional topics explored were program planning and financial management, human resource management, and managing quality of care. Results from these questions allow for some additional insight into which aspects of the module were most valued.

Results on the standard set of evaluation questions show a moderate rating overall for this module. Mean scores fell within the 4.0 range, while the percentage within the upper 5-6 range was 40% to 50%. Some results, however, suggest that there was some displeasure with the module. The lower end of the range for all of the items was 1 or 2, indicating that some participants gave very low scores to every item.
Of particular interest are the scores received by the additional questions, those which extended the question of "worthwhile use of time" to three specific areas. These three items differed in their mean scores from the general question and from each other. They also represented two of the three items on which a score of "1" was received. These differences show that results may change when the question is made more specific, and suggest that other items might be affected similarly if made more specific. This may raise some doubt about the validity of using a standard question across all modules as an indicator of satisfaction.

When the different subgroups are compared, there appears to be no dominant pattern of differences for either grouping. Participants with and without degrees/certificates in economics do not differ greatly on these items, and there is no pattern or direction to their differences. The results are similar for those with varying levels of experience in the healthcare field, although it does appear that those with the least experience rated the module higher across the items measured.

Results from the 66% benchmark criterion suggest that this module may need to explore ways of improving its overall content and delivery. Of the items assessed for the overall group, none were above the 66% level for participants scoring in the 5-6 range. This result was fairly consistent across subgroups. For the economics degree/certificate pair of subgroups, only one item, the degree to which participants felt trainers were clear, was above 66%, at 70.8%, for those with these credentials. No other item fell into that range for these two subgroups. Results were only slightly more positive for the two groups with different levels of experience. For those with the greatest amount of experience, three items fell above 66%. These findings suggest a need to review all aspects of the module, both course content and delivery.

Results from the pre/post course self-assessment are shown in TABLE 14B. When compared with other modules, these results suggest that participants felt they had gained somewhat less. The pre-course mean values were not atypical, falling in the 3.0 range. However, they do not appear to have increased in their post-course mean scores as much as in some of the earlier modules. The percentage increase ranges from approximately 30% to 40%, noticeably less than the gains posted by other modules. When the two sets of subgroups are viewed, those with degrees/certificates in economics do seem to have made smaller gains than those without this training, although this pattern is reversed on several items. The pattern for those with different levels of experience in the healthcare field is less clear. For most items, those with the most experience register the greatest gains in knowledge, but there are also several items where this pattern is reversed. Overall, even among the subgroups, it does not appear that the gains in post-module knowledge were as large as those registered in other modules.

PART V: CONCLUSIONS

In drawing conclusions from these evaluation results, one very important point should be considered: the course, in both its content and presentation, as well as the evaluation of that course, were pilot efforts. The course differed in many ways from other EDI core courses. It used a variety of teaching/learning methods in a unique
setting, where a large number of participants are brought together and receive training in several subject-area modules, in order to provide valuable training in healthcare reform and financing. It also included an independent evaluation that addressed issues of process and outcomes for the overall course and for specific modules. The evaluation methods selected and used varied, and attempted to provide rapid feedback on course performance as well as more summative, end-of-course information.

Viewing the findings from this perspective makes them appear as valuable lessons learned for improving both the course and how to evaluate that course most effectively, not as an attempt to discover what went wrong or to uncover some inherent weakness. The objective of the pilot effort was to discover where improvements could be made and to make these improvements in future offerings, including how best to identify these areas through evaluation. When viewed from this perspective, the following conclusions provide some direction for improving the course and subsequent evaluation efforts:

• Overall, the course met its short-term objectives of providing coordinated and exhaustive training in the general and specific areas of healthcare reform and financing. This result can be seen in the very favorable ratings given to the overall course and to specific modules through the various feedback instruments. Consistently favorable ratings do indicate a high level of satisfaction with all aspects of the course, including what and how much was learned.

• There appear to be some significant differences among participant subgroups that could be used to improve and strengthen the overall course and specific modules. Participants with degrees/certificates in economics found the material to be less complex and difficult than those without this background. These and similar differences between the two groups seemed to surface throughout the course. There were also differences observed between participants with varying levels of experience, although the pattern of these differences was less obvious. Staff planning future courses, modules and presentations should consider the audience, or audience mix, when selecting materials, content and methods of presentation. It may also be valuable to continue monitoring the same subgroups and possibly add others to future course evaluations.

• Results from the cognitive test used for Module 1 raise some important issues about the validity of standard self-assessments. As noted previously, results from the cognitive test questions used in Module 1 suggest that participants may not be learning the specific material, or mastering this information, to the extent intended by course planners. This is inconsistent with the pre/post course self-assessment questionnaire results, which show substantial gains in the level of information according to the participants. Future evaluations of this course should at least consider building cognitive testing into the other modules as a way of verifying the discrepancy found in Module 1. Should this discrepancy persist across other modules, the use of self-assessment measures should be reconsidered, with possibly more emphasis given to the development of a cognitively based set of indicators.
• While the possible lack of validity of self-assessment measures as indicators of learning has been raised, it should not be overlooked that such widespread consensus on the course's value may indicate some dimension of learning that is different from that measured with cognitive indicators. The finding that participants perceived such large gains in information may be important and meaningful, even though they could not demonstrate a certain level of mastery of this material through cognitive questions. These two types of indicators may be measuring different dimensions of learning. Some future attempt should be made to distinguish between these dimensions before abandoning the use of self-assessment instruments in favor of a purely cognitively based system.

• Findings from the evaluation identify many module-specific items that should be reviewed further and considered as the basis for making improvements. Some of these items were raised in several modules and appear to be more general concerns. Following is the set of common items identified across modules as likely problems that may be amenable to change:

1) Participants felt that they did not have sufficient time to prepare for the course and for daily presentations. They were asked to prepare for Module 1 prior to arriving for the course, to prepare prior to starting each new module and to prepare for daily presentations. A frequent complaint was that the preparation required too much time or that the material was too difficult.

2) A related complaint was that the materials required in the review were not used or discussed in the ensuing presentations. These results are reflected in the end-of-module questionnaires.

3) The tutor was used very little and, when used, was not rated very highly. There appeared to be a consensus on this issue across modules. It may be valuable to reconsider using tutoring support as designed.

4) Participants often criticized course content, especially case examples, as not relevant to their countries or to developing countries. It may be helpful for future presentations to reconsider these examples and to utilize those that participants feel they can relate to more readily.

ANNEX
or6 Min Max N 5.17 Mean %5 Range Usefulness of background papers .83 80.9 3 6 68 .84 84.4 3 6 25 .82 81 3 6 42 .73 84.8 4 6 33 .92 77.1 3 6 35 · -- 4.93 4.96 4.93 5.00 4.86 Clarity of background papers .83 75.5 3 6 68 .84 80.0 3 6 25 .84 76.2 3 6 42 .66 78.8 4 6 33 .97 74.3 3 6 35 -- 4.78 4.92 4.74 4.82 4.74 Usefulness of country studies .93 61 .8 3 6 68 .91 ·12.0 3 6 25 .91 57.1 3 6 41 .85 60.6 3 6 33 1.01 62.9 3 6 35 - 5.03 5.12 5.00 5.03 5.03 Usefulness of case study method 1.04 73.1 1 6 67 .93 80.0 3 6 25 1.12 70.7 1 6 42 1.13 72.7 1 6 33 .97 93.5 2 6 34 4.54 4.46 4.60 4.58 4.50 Usefulness of Policy Maker !raining 1.09 49.3 2 6 67 1.10 45.8 2 6 24 1.11 52.4 2 6 42 1.09 48.5 3 6 33 1.11 50.0 2 6 34 ----- ···· ··----- 5.20 5.48 5.36 5.33 5.46 Trainers' knowledge of Issues .81 85.3 3 6 68 .82 88.0 3 6 25 .82 83.3 3 6 42 .85 81 .8 3 6 33 .78 88.6 3 6 35 - · 5.24 5.16 5.31 5.18 5.29 Trairiers' answers to questions .81 82.4 3 6 68 .80 84.0 3 6 25 .81 83.3 3 6 42 .81 . 81.8 3 6 33 .83 82.9 3 6 35 ----- - -· - -- - 5.31 5.40 5.29 5.24 5.37 Trainers' clarity .80 82.4 3 6 68 .71 88.0 4 6 25 .83 81 .0 3 6 42 .79 84.8 3 6 33 .81 80.0 4 6 35 5.36 5.25 5.43 5.29 5.50 Adequacy of tutoring .81 81 .8 4 6 11 .96 75.0 4 ... 6 4 .79 85.7 4 6 7 .76 85.7 4 6 7 1.00 75.0 4 6 4 --·-· -- 4.82 4.92 4.79 4.76 4.89 Deg_,:~e~~pies treated in-deplh .90 64.7 3 6 68 1.00 72.0 2 6 25 .84 61.9 3 6 42 .83 57.6 3 6 33 .96 71.4 3 6 35 5.22 4.88 5.31 5.12 5.17 Degree module relevant to work .67 86.6 4 6 67 1.20 83.3 * 6 24 .68 81.1 4 6 42 1.14 87.5 * 6 32 .66 85.7 4 6 35 -- . - .... ··--·-- -- 5.21 5.22 5.23 5.29 5.13 Degree worthwhile use of time .77 82.5 3 6 63 .74 82.6 4 6 23 .78 84.6 3 6 39 .78 80.6 4 6 31 .75 84.4 3 6 32 Page 45 Table78 MODULE 2 PRE/POST EVALUATION QUESTIONS 10 Years or Less of Greater than 10 Years of MODULE 2 - PRE/POST QUESTIONS All Participants Economic Degrees No Economic Degrees Experience Experience Mean Gain Mean Gain Mean Gain Mean Gain Mean Gain - -·--· .... - ····· --- Rating of Pre/Post Knowledge on the Following Topics: Pre Post Total % Pre Post Tolal % Pre Post Total % Pre Post Total % Pre Post Total % Key Structural Components or the Health System 3.69 5.10 1.41 38.2 3.96 5.2 1.24 31.3 3.55 5.07 1.52 42.8 3.58 5.12 1.55 43.3 3.80 5.09 1.29 33.9 - - --- ---- How Components Detennlne Perfonnance 3.64 5.11 1.47 40.4 3.83 5.25 1.42 37 .1 3.51 5.02 1.51 43.0 3.59 5.03 1.44 40.1 3,68 5.18 1.50 40,8 -- - - - - Need for Causal Oiognoslic Model 3.43 5.09 1.66 48.4 3.52 5.2 1.68 47.7 3.37 5.02 1.66 49.3 3.31 5.03 1.72 52.0 3.54 5.14 1.60 45.2 - ---- - - --- Understand Baniers to Rcronn 3.52 5.21 1.69 48.0 3.60 5.28 1.68 46.7 3.49 5.20 1.71 49.0 3.72 5.09 1.38 37.1 3.34 5.31 1.97 59.0 - --- - - - - - - - ···· Recognize Key Players 4.26 5.31 1.04 24.4 4.32 5.32 1.00 23.1 4.24 5.33 1.10 49,0 4.33 5.24 0.91 21.0 4.20 5.37 1.17 27.9 - - - - ----··· . -- - - -· - -·- --- ---·-- - - -- - - - 3.40 5.13 1.74 51.2 3.12 4.88 1.76 56.4 3.55 5.29 1.74 49.0 3.55 5.18 1.64 46.2 3.26 5.09 1.83 - 56.1 Assess Political Options __ _ .. 
- -- ---- -- - -- Understand Role or Incentives 3.96 5.26 1.31 33.1 4.12 5.40 1.28 31.1 3.88 5.19 1.31 33.8 4.03 5.27 1.24 30.8 3.89 5.26 1.37 35.2 -··- -- ---- - - - - -- -- --------- Understand Financial Incentives Effect 3.97 5.21 1.24 31 .2 4.24 5.36 1.12 26.4 3.83 5.15 1.32 34 .5 4.09 5.27 1.18 28.9 3.85 5.15 1.29 33.5 ------ ------ - ·-- - - Undersla~d How Employee Behavior Is Influenced 3.65 4.91 1.26 34.5 3.72 4.96 1.24 33.3 3.62 4.90 1.29 35.6 3.76 4.88 1.12 29.8 3.54 4.94 1.40 39.5 - - -- - - - -· - - Understand Role or Coordination in lnsU!ullonal Refonn 3.85 5.12 1.26 32.7 3.76 5.08 1.32 35.1 3.90 5.14 4.24 31.8 4.06 5.09 1.03 25.4 3.66 5.14 1.49 40.7 - -·- -- ----· - - Understand Structure or NHA's 3.46 4.69 1.49 43.1 4.04 5.04 1.00 24.8 3.12 4.90 1.78 57.0 3.45 4.88 1.42 41.2 3.47 5.03 1.56 45.0 ---- .. ------- - - - - - -- - -- How to Use NHA's for Analysis 3.49 4.97 1.48 42.4 4.00 5.04 1.04 26.0 3.17 4.93 1.76 55.5 3.48 4.91 1.42 40.1 3.50 5.03 1.53 43.7 --- -- .. . ---- How lo Use NHA's for Diagnosis 3.18 4.91 1.73 54.4 3.48 4.92 1.44 41.4 3.00 4.93 1.93 64.3 3.24 4.94 • 1.10 52.5 3.12 4.88 1.76 56.4 Page46 Table8A MODULE 3 EVALUATION QUESTIONS 10 Years of Experience or Greater than 10 Years of MODULE 3 • EVALUATION QUESTIONS All Participants Economic Degrees No Economic Degrees Less Experience Mean %5 Range Mean %5 Range %5 %5 %5 ,-..- -Mean Range ·- Mean - Range Mean - Range Rating of the Following Course Items: S.D. or6 Min Max N S.D. or6 Min Max N S.D. or6 Min Max N S.D. or6 Min Max N S.D. or6 Min Max N 5.42 5.60 5.29 5.58 5.36 Degree content of background papers useful 0.78 83.3 4 6 24 0.70 90.0 4 6 10 0.83 78.6 4 6 14 0.51 100.0 5 6 12 0.92 72.7 4 6 11 - - >--- 5.33 5.60 5.14 5.58 5.18 Degree background papers clear 0.76 83.3 4 6 24 0.70 90.0 4 6 10 0,77 78.6 4 6 14 0.51 100.0 5 6 12 0.87 72.7 4 6 11 -· ·-·- ... ---- -- - -- 5.13 5.2 5.07 5.33 5.00 Degree evidence from countcy studies useful 0.68 83.3 4 6 24 0.79 80.0 4 6 10 0.62 85.7 4 6 14 0.49 100.0 5 6 12 0.77 72.7 4 6 11 5.04 5.0 5.07 5.42 4.73 Degree case method of learning useful 0.75 75.0 4 6 24 0.82 . 70.0 4 6 10 0.73 78.6 4 6 14 0.51 100.0 5 6 12 0.79 54.5 4 ,- 6 11 5.48 . -·- · ·-·- 5.64 5.36 5.67 - 5.42 - Degree trainers knowledgeable of issues 0.87 84.0 3 6 25 0.67 90.9 4 6 11 1.01 78.6 3 6 14 0.65 91.7 4 6 12 1.00 83.3 3 6 12 - - - - - - -- 5.12 5.00 5.21 5.33 5.08 Degree trainers adequately answer queslions 0:93 80.0 3 6 25 ,_ 1.00 72.7 - --- 3 6 11 ··-- · 0.89 85.7 3 6 14 0.65 91 .7 4 ·-- 6 12 1.00 75.0 3 6 12 5.32 5.45 5.21 5.67 5.08 Degree trainers clear 0.95 76.0 3 6 25 0.82 81 .8 4 6 11 1.05 71 .4 3 6 14 0.65 91 .7 4 6 12 1.08 66.7 3 6 12 4.2 4.00 - 4.25 - -- - 5.00 . 4.00 . -- ~ 0.45 20.0 4 5 5 0.0 4 4 1 0.50 25.0 4 5 4 0 100.0 5 .... 0.0 3 Degree tutoring adequate 5 1 4 4 ••••• -·- . ··-· - - -- - - 4.79 4.60 4.93 5.08 4 .55 Degree topics treated In-depth 0.78 66.7 3 6 24 0.97 50.0 3 6 10 0.62 78.6 4 6 14 0.67 83.3 4 6 12 0.82 54.5 3 6 11 ·- . 
- ·- --- ·· - - 5.16 5.18 5.14 5.17 5.25 Degree module relevant to work 0.75 80.0 4 6 25 0.75 81 .8 4 6 11 0.77 78.6 4 6 14 0.72 83.3 4 6 12 0.75 83.3 4 6 12 5.28 5.27 5.29 5.50 5.17 Degree module worthwhile use of lime 0.79 80.0 4 6 25 0.79 81 .8 4 6 11 0.83 78.6 ,4 6 14 0.52 100.0 5 6 12 0.94 66.7 4 6 12 Page47 Table8B MODULE 3 PRE/POST EVALUATION QUESTIONS 10 Years or Less of Greater than 10 Years of MODULE 3 - PRE/POST QUESTIONS All Participants Economic Degrees No Economic Degrees Experience Experience Mean Gain Meari Gain Mean Gain Mean Gain Mean Gain Rating of Pre/Post Knowledge on the Followlng Topics: Pre Post Total % Pre Post Total % Pre Post Total % Pre Post Total % Pre Post Total % Undersland fundementals of health care financing 3.39 5.04 1.65 48.7 3.60 5.00 1.40 38.9 3.23 5.08 1.85 57.3 3.08 5.00 1.92 62.3 3.70 5.20 1.50 40.5 - --- - -- - - Understand process offormulallng financing policy 3.16 4.83 1.70 54.3 3.50 4.60 1.10 31.4 2.85 5.00 2.15 75.4 3.08 4.83 1.75 56.8 3.20 5.00 1.80 56.2 Appreciate how NHA's used In flnanacing and funding 3.00 4.75 1.75 58.3 3.40 4.80 1.40 46.7 2.71 4.71 2.00 73.8 2.58 4.58 2.00 77.5 3.45 5.00 1.55 44.9 - -- - - - - -·-- Undersand economic impacts or various financing approaches 3.13 5.00 1.88 60.1 3.70 5.00 1.30 35.1 2.71 5.00 2.29 84.5 3.25 5.33 2.08 64.0 3.00 4.73 1.73 57.7 Undersland social/heallh impacls of various financing approaches 3.04 4.83 1.79 55.9 3.20 4.80 1.60 50.0 2.93 4.86 1.93 65.9 3.00 4.83 1.83 61.0 3.09 4.91 1.82 58.9 -· Comprehend key issues of govern. operaled social insurance 2.79 4.71 -1.92 68.8 3.00 4.80 1.80 60.0 2.64 4.64 2.00 75.8 2.75 4.75 2.00 72.7 2.73 4.73 2.00 73.3 - - ·--· ·· - . --·-· --- - - - - ------ ·- -- - - - --- - ---- -- Comprehend key issues of mandaled social insurance 2.48 4.52 2.04 82.3 2.78 4.56 1.78 64.0 2.29 4.50 2.21 96.5 2.45 4.55 2.09 85.3 2.36 4.55 2.18 92.4 - -- - - --- ------- ··-- .. --- ·- -· -·--· - . - -· Comprehend key Issues of user fees 3.21 4.83 1.63 19.6 3.50 4.60 1.10 31.4 3.00 5.00 2.00 66.7 2.92 4.67 1.75 59.9 3.45 5.09 1.64 47.5 ----- - - ··-··-- - -- - --· Comprehend key Issues for community financing 2.79 4.58 1.79 64.2 2.60 4.20 1.60 61 .5 2.93 4.86 1.93 65.9 2.67 4.50 1.83 68.5 2.82 4.73 1.91 67.7 ---- Comprehend key Issues for medical savings accounts 2.00 4.25 2.25 112.5 1.90 4.00 2.10 110.5 2.07 4.43 2.36 114.0 2.0 4.50 2.50 125.0 1.91 4.00 2.09 109.4 Undersland requirements for privale heallh insurance Implement. 2.46 4.08 1.63 66.3 2.90 4.10 1.20 41.4 2.14 4.07 1.93 90.2 2.58 4.17 1.58 61.2 2.27 4.00 1.73 76.2 Page 48 Table9A MODULE 4 EVALUATION QUESTIONS 10 Years of Experience or Greater than 10 Years of MODULE 3 - EVALUATION QUESTIONS All Participants Economic Degrees No Economic Degrees Less Experience Mean %5 Range Mean %5 Range Mean %5 Range Mean %5 Range Mean %5 Range Rating of the Following Course Items: S.D. ors Min Max N S.D. ors Min Max N S.D. ors Min Max N S.D. ors Min Max N S.D. ors Min Max N 4.95 5.11 4.63 4.63 5.00 Degree content background papers useful 0.74 71.4 4 s 21 0.76 77.8 4 6 9 0.72 66.7 4 6 12 0.98 50.0 4 6 6 0.65 80.0 4 6 15 4.86 5.11 4.67 4.83 4.87 Degree bacl 71--- 6-6_.7 - +-- 3-! 
___ 5 12 0.83 73.3 4 6 15 4.52 4.45 4.56 4.33 4.67 D 1__e=-gr_ee_l_ra_ a_ in_e_rs_a_d_eq.:.u_at_e..:..ly_ ns _w_e_rq u ..:.._e_ o_ st_i _ _ _ __ _ _1_ 0 ns _._64-+-_4_4_.4-l- - 4+-_ 6-+-2--17 0.52 45.5 4 5 11 0.73 43.8 4 6 16 0.49 33.3 4 5 12 0.72 53.3 4 6 15 4.59 4.64 4.56 4.50 4.67 Degree trainers clear 0.75 59.3 3 6 27 0.67 72.7 3 5 11 0.81 50.0 3 6 16 0.67 58.3 3 5 12 0.82 60.0 3 6 15 1-..:.....-- - -- - -- - - - - -- -- - - 1 - - - + - - - t - - - - 1----+- - - - - -+-- -+--+--f--il-----+----+----+- -+ - 4.00 5.00 3.50 4.00 4.00 ~-~~r~e-~".~oring adequate, __ _ _ _ _ _ __ _ _ _ _ , __ 1._0~0,__3_3_.3_,__ 3-+-- 5+-_ 3•--••_•,__10 _0 _ ,__5 _.0 _,__5-+-_1, __ .7_1+---_0_._o,___3,___4_,__2-+-_ •__ 0_ ", __ 0_.0_1_ _ 41----4-+----1 1.41 50.0 3 5 2 4.41 4.55 4.31 4.42 4.40 D_e=-gr_ee_t_oc....pl_cs_t_re_at_e_d_ln_s_uffi_,c 1_ _ le _n_ td _e..:..p_th_ _ _ _ _ _ _ _ _ 1__0_._6_91---4_4_.4-+--3+----16 27 0.52 54.5 4 5 11 0.79 37.5 3 6 16 0.51 41.7 4 5 12 0.83 46.7 3 6 15 5.00 5.00 5.00 4.83 5.13 Degree module relevanllo wont 0.62 81.5 4 6 27 0.77 72.7 4 6 11 0.52 87.5 4 6 16 0.39 83.3 4 5 12 0.74 80.0 4 6 15 1----------------- - - - 1 - - - - - - - t - - - 1 - - - - --------- --·----- ----- - 4.67 5.00 4.44 4.58 4.73 Degree module worthwhile use ol Ume 0.73 59.3 3 6 27 0.63 81 .8 4 6 11 0.73 43.8 3 6 16 0.67 50.0 4 6 12 0.80 66.7 3 6 15 Page 51 Table108 MODULE 5 PRE/POST EVALUATION QUESTIONS 10 Years or Less of Greater than 10 Years of MODULE 5 - PRE/POST QUESTIONS All Participants Economic Degrees No Economic Degrees Experience Experience Mean Gain Mean Gain Mean Gain Mean Gain Mean Gain - - -- - Rating of Pre/Post Knowledge on the Following Topics: Pre Post Total % Pre Post Total % Pre Post Total % Pre Post Total % Pre Post Total % Define basic package of health services 3.41 5.11 1.70 49.9 3.82 5.27 1.45 38.0 3.13 5.00 1.88 60.1 3.42 5.08 1.67 48.8 3.40 5.13 1.73 50.9 - -· - - ---- --- - - - - - - ·- - - -- - - -- - - - - - - -· - - Identify approaches available to decide on services 3.11 5.04 1.93 62.1 3.36 5.09 1.73 51.5 2.94 5.00 2.06 70.1 3.00 4.92 1.92 64.0 3.20 5.13 1.93 60.3 - ------- --- --- Identify strengths/Weaknesses of approaches 2.93 4.81 1.89 64.5 3.18 4.82 1.64 51 .6 2.75 4.81 2.06 74.9 2.83 4.83 2.00 70.7 3.00 4.80 1.80 60.0 - ---- - -- -· 3.11 5.19 2.07 66.6 3.55 5.27 1.73 48.7 2.81 5.13 2.31 82.2 3.17 5.17 2.00 63.1 3.07 Understand rationale of cost-effective techniques for prioritizing --· - - --·--·- - --- -- ·- - -· - - - - -- - - - - -- - - · - - - - - 5.20 - 2.13 - - 69.4 - 0etennlne Info. needed for cost-effective lechnlques 3.11 4.96 1.85 59.5 3.27 5.18 1.91 58.4 3.00 4.81 1.81 60.3 2.92 4.92 2.00 68.5 3.27 5.00 1.73 52.9 . -- - -----·· ·-·. -- •• ·-· -·- ------- . --- Understand how Intervention dala used to prioritize 3.22 4.89 1.67 51 .9 3.27 5.09 1.82 55.7 3.19 4.75 1.56 48.9 2.92 4.83 1.92 65.8 3.47 4.93 1.47 42.4 -- f--- Understand how cost data used to prioritize 3.11 4.63 1.52 48.9 3.55 4.82 1.27 35.8 2.81 4.50 1.69 60.1 3.17 4.75 1.58 49.8 3.07 4.53 1.47 47.9 Understand value of demand lnfonnatlon 3.19 4.96 1.78 55.8 3.55 5.00 1.45 40.8 2.94 4.94 2.00 68.0 3.08 4.92 1.83 59.4 3.27 5.00 1.73 52.9 -· Understand how society's preferences In basic package 3.44 5.22 1.78 51 .7 3.91 5.55 1.64 41 .9 3.13 5.00 1.88 60.1 3.42 5.17 1.75 51.2 3.47 5.27 1.80 51 .9 - Understand use of society's preferences In prioritizing services 3.37 4.93 1.56 46.3 3.73 5.18 __ 1.45 __ _.,_ 38.9 3.13 4.75 1.63 52.1 3.17 4.83 1.67 52.7 3.53 5.00 1.47 41 .6 -- -- - . 
Table 11A. Module 6A Evaluation Questions
Rating of the following course items; cell format: Mean / S.D. / % rating 5 or 6 / Min / Max / N.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years of Experience or Less | Greater than 10 Years of Experience

Degree content of background papers useful: 4.89/0.88/68.4/3/6/19 | 4.63/1.06/50.0/3/6/8 | 5.09/0.70/81.8/4/6/11 | 4.50/0.76/62.5/3/5/8 | 5.18/0.87/72.7/4/6/11
Degree background papers clear: 5.11/0.81/73.7/4/6/19 | 4.75/0.89/50.0/4/6/8 | 5.36/0.67/90.9/4/6/11 | 4.88/0.83/62.5/4/6/8 | 5.27/0.79/81.8/4/6/11
Degree evidence from country studies useful: 4.58/0.96/52.6/2/6/19 | 4.13/1.13/25.0/2/6/8 | 4.91/0.70/72.7/4/6/11 | 4.25/1.16/37.5/2/6/8 | 4.82/0.75/63.6/4/6/11
Degree case method useful: 4.37/1.12/47.4/2/6/19 | 3.75/1.28/25.0/2/6/8 | 4.82/0.75/63.6/4/6/11 | 3.88/1.25/25.0/2/6/8 | 4.73/0.90/63.6/3/6/11
Degree trainers knowledgeable of issues: 4.63/1.16/57.9/2/6/19 | 4.38/1.41/50.0/2/6/8 | 4.82/0.98/63.6/3/6/11 | 4.38/0.92/37.5/3/6/8 | 4.82/1.33/72.7/2/6/11
Degree trainers adequately answer questions: 4.47/1.17/52.6/2/6/19 | 4.00/1.51/37.5/2/6/8 | 4.82/0.75/63.6/4/6/11 | 4.00/0.93/37.5/3/5/8 | 4.82/1.25/63.6/2/6/11
Degree trainers clear: 4.63/0.90/57.9/3/6/19 | 4.63/1.19/62.5/3/6/8 | 4.64/0.67/54.5/4/6/11 | 4.50/0.76/62.5/3/5/8 | 4.73/1.01/54.5/3/6/11
Degree tutoring adequate: 5.00/.../100.0/5/5/1 | .../.../.../.../.../0 | 5.00/.../100.0/5/5/1 | .../.../.../.../.../0 | 5.00/.../100.0/5/5/1
Degree topic treated in-depth: 4.37/0.83/52.6/2/5/19 | 3.88/0.99/25.0/2/5/8 | 4.73/0.47/72.7/4/5/11 | 3.88/0.99/25.0/2/5/8 | 4.73/0.47/72.7/4/5/11
Degree module relevant to work: 4.84/1.17/63.2/2/6/19 | 4.50/1.41/62.5/2/6/8 | 5.09/0.94/63.6/4/6/11 | 4.75/1.49/75.0/2/6/8 | 4.91/0.94/54.5/4/6/11
Degree module worthwhile use of time: 4.68/1.06/63.2/2/6/19 | 4.25/1.28/50.0/2/6/8 | 5.00/0.77/72.7/4/6/11 | 4.38/1.30/62.5/2/6/8 | 4.91/0.83/63.6/4/6/11

Table 11B. Module 6A Pre/Post Evaluation Questions
Rating of pre/post knowledge on the following topics; cell format: Pre / Post / Total Gain / % Gain.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years or Less of Experience | Greater than 10 Years of Experience

Appreciate consumer problems as services purchasers: 4.00/5.05/1.05/26.3 | 4.00/4.63/0.63/15.8 | 4.00/5.36/1.36/34.0 | 3.88/4.88/1.00/25.8 | 4.09/5.18/1.09/26.7
Identify problems of publicly financed/provided health: 4.05/5.11/1.05/26.0 | 4.13/5.00/0.88/21.3 | 4.00/5.18/1.18/29.5 | 4.25/5.00/0.75/17.6 | 3.91/5.18/1.27/32.5
Understand reasons for separating services purchasing/providing: 3.58/5.32/1.74/48.6 | 3.50/5.00/1.50/42.9 | 3.64/5.55/1.91/52.5 | 3.50/5.13/1.63/46.6 | 3.64/5.45/1.82/50.0
Appreciate importance of purchasing organizations: 3.42/5.16/1.74/50.9 | 3.25/4.88/1.63/50.2 | 3.55/5.36/1.82/51.3 | 3.25/5.13/1.88/57.8 | 3.55/5.18/1.64/46.2
Appreciate potential limitations of a purchasing organization: 3.06/4.89/1.83/59.8 | 3.00/4.75/1.75/58.3 | 3.10/5.00/1.90/61.3 | 3.00/4.88/1.88/62.7 | 3.10/4.90/1.80/58.1
Explain how purchasers create incentives for cost-containment: 3.63/4.95/1.32/36.4 | 3.38/4.63/1.25/37.0 | 3.82/5.18/1.36/35.6 | 3.50/4.88/1.38/39.4 | 3.73/5.00/1.27/34.0
Define autonomy in provision of health services by public provider: 3.68/4.89/1.21/32.9 | 3.38/4.38/1.00/29.6 | 3.91/5.27/1.36/34.8 | 3.75/4.63/0.88/23.5 | 3.64/5.09/1.45/39.8
Explain why autonomy important to performance and quality: 3.89/5.11/1.21/31.1 | 3.75/4.88/1.13/30.1 | 4.00/5.27/1.27/31.8 | 4.00/4.75/0.75/18.8 | 3.82/5.36/1.55/40.6
Appreciate problems of setting up autonomous organizations: 3.79/5.11/1.32/34.8 | 3.50/4.88/1.38/39.4 | 4.00/5.27/1.27/31.8 | 3.88/5.00/1.13/29.1 | 3.73/5.18/1.45/38.9
Understand role of contracting: 3.47/5.21/1.74/50.1 | 3.50/4.75/1.25/35.7 | 3.45/5.55/2.09/60.6 | 3.50/5.13/1.63/46.6 | 3.45/5.27/1.82/52.8
Understand types of contracting: 3.16/4.95/1.79/56.6 | 3.13/4.25/1.13/36.1 | 3.18/5.45/2.27/71.4 | 3.25/4.63/1.38/42.5 | 3.09/5.18/2.09/67.6
Understand how to develop a contract: 3.11/4.68/1.58/50.8 | 3.00/4.50/1.50/50.0 | 3.18/4.82/1.64/51.6 | 3.13/4.75/1.63/52.1 | 3.09/4.64/1.55/50.2
Table 12A. Module 6B Evaluation Questions
Rating of the following course items; cell format: Mean / S.D. / % rating 5 or 6 / Min / Max / N.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years of Experience or Less | Greater than 10 Years of Experience

Degree content of background papers useful: 5.08/0.66/84.6/3/6/13 | 5.00/0.71/60.0/4/6/5 | 5.13/0.99/67.5/3/6/8 | 5.33/0.58/100.0/5/6/3 | 5.11/0.93/88.9/3/6/9
Degree background papers clear: 5.00/0.82/84.6/3/6/13 | 5.00/0.71/80.0/4/6/5 | 5.00/0.93/87.5/3/6/8 | 5.33/0.58/100.0/5/6/3 | 4.69/0.93/77.8/3/6/9
Degree evidence from country studies useful: 4.85/1.34/61.5/2/6/13 | 4.40/1.14/40.0/3/6/5 | 5.13/1.46/75.0/2/6/8 | 4.67/1.15/33.3/4/6/3 | 4.78/1.48/66.7/2/6/9
Degree case method useful: 4.85/1.68/69.2/1/6/13 | 4.20/1.48/40.0/2/6/5 | 5.25/1.75/87.5/1/6/8 | 5.00/1.00/66.7/4/6/3 | 4.69/1.96/77.8/1/6/9
Degree trainers knowledgeable of issues: 5.08/0.95/76.9/3/6/13 | 4.60/1.14/60.0/3/6/5 | 5.38/0.74/87.5/4/6/8 | 5.00/1.00/66.7/4/6/3 | 5.11/1.05/77.8/3/6/9
Degree trainers adequately answer questions: 5.15/0.80/76.9/4/6/13 | 4.80/0.84/60.0/4/6/5 | 5.38/0.74/87.5/4/6/8 | 5.33/1.15/66.7/4/6/3 | 5.11/0.78/77.8/4/6/9
Degree trainers clear: 4.92/0.90/75.0/3/6/12 | 4.50/0.58/50.0/4/5/4 | 5.13/0.99/67.5/3/6/8 | 5.00/1.41/50.0/4/6/2 | 4.89/0.93/77.8/3/6/9
Degree tutoring adequate: 6.00/.../100.0/6/6/1 | ... | 6.00/.../100.0/6/6/1 | ... | 6.00/.../100.0/6/6/1
Degree topic treated in-depth: 4.62/1.04/61.5/2/6/13 | 4.40/0.55/40.0/4/5/5 | 4.75/1.28/75.0/2/6/8 | 4.33/0.58/33.3/4/5/3 | 4.67/1.22/66.7/2/6/9
Degree module relevant to work: 5.00/1.41/76.9/1/6/13 | 4.60/0.55/60.0/4/5/5 | 5.25/1.75/87.5/1/6/8 | 5.00/1.00/66.7/4/6/3 | 5.00/1.66/77.8/1/6/9
Degree module worthwhile use of time: 4.85/1.34/76.9/1/6/13 | 5.20/0.84/80.0/4/6/5 | 4.63/1.60/75.0/1/6/8 | 5.67/0.58/100.0/5/6/3 | 4.56/1.51/66.7/1/6/9
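For the evaluation-question tables (9A through 14A), each cell summarizes responses to one questionnaire item on what appears, from the reported ranges, to be a 1-to-6 rating scale: the mean, the standard deviation, the percentage of respondents rating the item 5 or 6, the minimum and maximum rating given, and the number of respondents. The sketch below is an illustration only, not the instrument's actual scoring routine, and the ratings it uses are invented for the example.

```python
# Illustrative sketch only: the per-item summary statistics reported in the
# evaluation-question tables (mean, S.D., % rating 5 or 6, min, max, N),
# computed from a hypothetical list of raw ratings on a 1-to-6 scale.
from statistics import mean, stdev

def summarize_item(ratings: list[int]) -> dict:
    n = len(ratings)
    return {
        "mean": round(mean(ratings), 2),
        "sd": round(stdev(ratings), 2) if n > 1 else None,  # S.D. undefined when N = 1
        "pct_5_or_6": round(100.0 * sum(r >= 5 for r in ratings) / n, 1),
        "min": min(ratings),
        "max": max(ratings),
        "n": n,
    }

# Hypothetical ratings from 13 respondents (values invented for illustration):
print(summarize_item([6, 5, 5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 3]))
```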
Table 12B. Module 6B Pre/Post Evaluation Questions
Rating of pre/post knowledge on the following topics; cell format: Pre / Post / Total Gain / % Gain.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years or Less of Experience | Greater than 10 Years of Experience

Appreciate consumer problems as services purchasers: 3.92/5.31/1.38/35.2 | 4.00/5.40/1.40/35.0 | 3.88/5.25/1.38/35.6 | 4.00/6.00/2.00/50.0 | 3.89/5.11/1.22/31.4
Identify problems of publicly financed/provided health: 4.15/5.46/1.31/31.6 | 4.00/5.40/1.40/35.0 | 4.25/5.50/1.25/29.4 | 4.33/5.67/1.33/30.7 | 4.11/5.44/1.33/32.4
Understand reasons for separating services purchasing/providing: 3.54/5.38/1.85/52.3 | 3.60/5.20/1.60/44.4 | 3.50/5.50/2.00/57.1 | 3.33/5.67/2.33/70.0 | 3.56/5.33/1.78/50.0
Appreciate importance of purchasing organizations: 3.31/5.31/2.00/60.4 | 3.00/5.20/2.20/73.3 | 3.50/5.38/1.88/53.7 | 3.00/5.67/2.67/89.0 | 3.44/5.33/1.89/54.9
Appreciate potential limitations of a purchasing organization: 3.15/5.15/2.00/63.5 | 3.00/4.80/1.80/60.0 | 3.25/5.38/2.13/65.5 | 3.00/5.67/2.67/89.0 | 3.22/5.11/1.89/58.7
Explain how purchasers create incentives for cost-containment: 3.38/5.46/2.08/61.5 | 3.20/5.40/2.20/68.8 | 3.50/5.50/2.00/57.1 | 3.00/6.00/3.00/100.0 | 3.44/5.33/1.89/54.9
Define autonomy in provision of health services by public provider: 3.38/5.54/2.15/63.6 | 3.80/5.40/1.60/42.1 | 3.13/5.63/2.50/79.9 | 3.67/6.00/2.33/63.5 | 3.22/5.44/2.22/68.9
Explain why autonomy important to performance and quality: 3.62/5.54/1.92/53.0 | 3.40/5.40/2.00/58.8 | 3.75/5.63/1.88/50.1 | 3.67/6.00/2.33/63.5 | 3.56/5.44/1.89/53.1
Appreciate problems of setting up autonomous organizations: 3.38/5.38/2.00/59.2 | 3.40/5.40/2.00/58.8 | 3.38/5.38/2.00/51.2 | 3.67/6.00/2.33/63.5 | 3.22/5.22/2.00/62.1
Understand role of contracting: 3.46/5.46/2.00/59.2 | 3.80/5.40/1.60/42.1 | 3.25/5.50/2.25/69.2 | 4.00/6.00/2.00/50.0 | 3.22/5.33/2.11/65.5
Understand types of contracting: 3.08/5.00/1.92/62.3 | 3.00/4.60/1.60/53.3 | 3.13/5.25/2.13/68.1 | 3.00/5.33/2.33/77.7 | 3.11/5.00/1.89/60.8
Understand how to develop a contract: 2.85/5.23/2.38/83.5 | 2.80/4.80/2.00/71.4 | 2.88/5.50/2.63/91.3 | 2.67/5.33/2.67/100.0 | 2.89/5.33/2.44/84.4

Table 13A. Module 7 Evaluation Questions
Rating of the following course items; cell format: Mean / S.D. / % rating 5 or 6 / Min / Max / N.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years of Experience or Less | Greater than 10 Years of Experience

Degree background papers useful: 5.23/0.75/81.8/4/6/22 | 5.60/0.52/100.0/5/6/10 | 4.92/0.79/66.7/4/6/12 | 5.60/0.70/90.0/4/6/10 | 4.92/0.67/75.0/4/6/12
Degree background papers clear: 5.05/0.65/81.8/4/6/22 | 5.10/0.57/90.0/4/6/10 | 5.00/0.74/75.0/4/6/12 | 5.10/0.74/80.0/4/6/10 | 5.00/0.60/83.3/4/6/12
Degree evidence from country studies useful: 4.68/0.72/63.6/3/6/22 | 5.00/0.67/80.0/4/6/10 | 4.42/0.67/50.0/3/5/12 | 4.70/0.82/70.0/3/6/10 | 4.67/0.65/58.3/4/6/12
Degree case method of learning useful: 4.95/0.72/72.7/4/6/22 | 5.10/0.74/80.0/4/6/10 | 4.83/0.72/66.7/4/6/12 | 5.00/0.82/70.0/4/6/10 | 4.92/0.67/75.0/4/6/12
Degree trainers knowledgeable of issues: 5.23/0.69/86.4/4/6/22 | 5.50/0.53/100.0/5/6/10 | 5.00/0.74/75.0/4/6/12 | 5.30/0.67/90.0/4/6/10 | 5.17/0.72/83.3/4/6/12
Degree trainers adequately answer questions: 5.27/0.70/86.4/4/6/22 | 5.50/0.53/100.0/5/6/10 | 5.08/0.79/75.0/4/6/12 | 5.30/0.67/90.0/4/6/10 | 5.25/0.75/83.3/4/6/12
Degree trainers clear: 5.14/0.71/81.8/4/6/22 | 5.20/0.63/90.0/4/6/10 | 5.08/0.79/75.0/4/6/12 | 5.10/0.74/80.0/4/6/10 | 5.17/0.72/83.3/4/6/12
Degree tutoring adequate: 4.75/0.50/75.0/4/5/4 | 5.00/.../100.0/5/5/1 | 4.67/0.58/66.7/4/5/3 | 5.00/.../100.0/5/5/1 | 4.67/0.58/66.7/4/5/3
Degree topics treated in sufficient depth: 4.91/0.61/77.3/4/6/22 | 5.00/0.47/90.0/4/6/10 | 4.83/0.72/66.7/4/6/12 | 5.00/0.67/80.0/4/6/10 | 4.83/0.58/75.0/4/6/12
Degree module relevant to work: 5.05/0.72/77.3/4/6/22 | 5.30/0.67/90.0/4/6/10 | 4.83/0.72/66.7/4/6/12 | 5.20/0.79/80.0/4/6/10 | 4.92/0.67/75.0/4/6/12
Degree module worthwhile use of time: 5.09/0.75/77.3/4/6/22 | 5.30/0.67/90.0/4/6/10 | 4.92/0.79/66.7/4/6/12 | 5.20/0.79/80.0/4/6/10 | 5.00/0.74/75.0/4/6/12
Table 13B. Module 7 Pre/Post Evaluation Questions
Rating of pre/post knowledge on the following topics; cell format: Pre / Post / Total Gain / % Gain.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years or Less of Experience | Greater than 10 Years of Experience

Understand consequences of information asymmetry: 3.86/5.14/1.27/32.9 | 4.20/5.30/1.10/26.2 | 3.58/5.00/1.42/39.7 | 4.10/5.10/1.00/24.4 | 3.67/5.17/1.50/40.9
Predict impact of incentives on doctors' behavior: 3.73/5.27/1.55/41.5 | 3.80/5.30/1.50/39.5 | 3.67/5.25/1.58/43.1 | 3.70/5.20/1.50/40.5 | 3.75/5.33/1.58/42.1
Predict impact of incentives on behavior of hospital providers: 3.33/5.19/1.86/55.9 | 3.70/5.30/1.60/43.2 | 3.00/5.00/2.09/69.7 | 3.44/5.22/1.78/51.7 | 3.25/5.17/1.92/59.1
Understand impact of user charges on utilization: 3.68/5.18/1.50/40.8 | 4.20/5.40/1.20/28.6 | 3.25/5.00/1.75/53.8 | 3.80/5.10/1.30/34.2 | 3.58/5.25/1.67/46.6
Understand rationale of weighted capitation formulae: 3.14/5.09/1.95/62.1 | 3.50/5.30/1.80/51.4 | 2.83/4.92/2.08/73.5 | 3.30/5.40/2.10/63.6 | 3.00/4.83/1.83/61.0
Appreciate capitation's implications for achieving goals: 3.36/5.09/1.73/51.5 | 3.60/5.30/1.70/47.2 | 3.17/4.92/1.75/55.2 | 3.40/5.30/1.90/55.9 | 3.33/4.92/1.58/47.4
Appreciate rationale/effects of market mechanisms on supply: 3.18/4.86/1.68/52.8 | 3.50/5.00/1.50/42.9 | 2.92/4.75/1.83/62.7 | 3.20/5.00/1.80/56.3 | 3.17/4.75/1.58/49.8
Appreciate rationale/effects of managed care on supply/demand: 3.05/4.59/1.55/50.8 | 3.40/4.80/1.40/41.2 | 2.75/4.42/1.67/60.7 | 3.10/4.60/1.50/48.4 | 3.00/4.58/1.58/52.7
Identify how to evaluate performance of market-type reform: 3.23/4.68/1.45/44.9 | 3.50/4.90/1.40/40.0 | 3.00/4.50/1.50/50.0 | 3.20/4.60/1.40/43.8 | 3.25/4.75/1.50/46.2
Table 14A. Module 10 Evaluation Questions
Rating of the following course items; cell format: Mean / S.D. / % rating 5 or 6 / Min / Max / N.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years of Experience or Less | Greater than 10 Years of Experience

Degree background papers useful: 4.39/0.95/49.2/2/6/59 | 4.36/1.01/45.6/2/6/24 | 4.39/0.93/51.5/2/6/33 | 4.40/0.67/52.0/2/6/25 | 4.39/1.03/46.5/2/6/33
Degree background papers clear: 4.51/0.90/55.9/2/6/59 | 4.50/0.93/54.2/3/6/24 | 4.55/0.90/60.6/2/6/33 | 4.52/0.77/60.0/3/6/25 | 4.52/1.00/54.5/2/6/33
Degree evidence from country studies useful: 4.36/1.05/45.6/2/6/59 | 4.29/1.00/37.5/2/6/24 | 4.42/1.12/54.5/2/6/33 | 4.24/0.93/44.0/2/6/25 | 4.45/1.15/46.5/2/6/33
Degree learning exercises useful: 4.36/1.05/44.6/2/6/58 | 4.46/1.06/50.0/2/6/24 | 4.31/1.09/43.8/2/6/32 | 4.17/0.62/29.2/2/6/24 | 4.52/1.20/57.8/2/6/33
Degree trainers knowledgeable of issues: 4.69/0.92/62.1/2/6/58 | 4.83/0.96/66.7/2/6/24 | 4.59/0.91/59.4/2/6/32 | 4.56/0.62/56.0/2/6/25 | 4.81/1.00/68.8/2/6/32
Degree trainers adequately answer questions: 4.41/1.02/50.8/1/6/59 | 4.63/1.01/54.2/2/6/24 | 4.27/1.04/51.5/1/6/33 | 4.52/0.82/52.0/2/6/25 | 4.33/1.16/51.5/1/6/33
Degree trainers clear: 4.54/0.97/55.9/2/6/59 | 4.79/0.98/70.8/2/6/24 | 4.36/0.96/45.5/2/6/33 | 4.48/0.62/48.0/2/6/25 | 4.61/1.09/63.6/2/6/33
Degree module relevant to work: 4.56/0.95/55.9/2/6/59 | 4.54/0.86/50.0/2/6/24 | 4.61/1.00/60.6/3/6/33 | 4.36/0.91/44.0/2/6/25 | 4.76/0.94/68.7/3/6/33
Degree program planning & financial mgmt. worthwhile use of time: 4.29/1.16/44.1/1/6/59 | 4.33/1.17/45.8/1/6/24 | 4.27/1.18/42.4/1/6/33 | 4.04/1.02/26.0/1/6/25 | 4.52/1.23/57.6/1/6/33
Degree human resource mgmt. worthwhile use of time: 4.69/1.00/57.6/2/6/59 | 4.67/0.82/45.6/4/6/24 | 4.76/1.12/66.7/2/6/33 | 4.56/0.65/48.0/4/6/25 | 4.85/1.18/66.7/2/6/33
Degree managing quality of care worthwhile use of time: 4.34/1.15/45.8/1/6/59 | 4.42/1.02/41.7/2/6/24 | 4.33/1.27/51.5/1/6/33 | 4.32/1.07/48.0/2/6/25 | 4.39/1.22/45.5/1/6/33
Degree module worthwhile use of time: 4.39/1.00/44.1/2/6/59 | 4.46/0.98/41.7/2/6/24 | 4.39/1.03/48.5/2/6/33 | 4.32/0.60/40.0/2/6/25 | 4.48/1.12/46.5/2/6/33
Table 14B. Module 10 Pre/Post Evaluation Questions
Rating of pre/post knowledge on the following topics; cell format: Pre / Post / Total Gain / % Gain.
Columns: All Participants | Economic Degrees | No Economic Degrees | 10 Years or Less of Experience | Greater than 10 Years of Experience

Identify steps for effective reform implementation: 3.61/4.69/1.08/29.9 | 3.75/4.63/0.88/23.5 | 3.58/4.76/1.18/33.0 | 3.56/4.56/1.00/28.1 | 3.67/4.62/1.15/31.3
Conduct implementation analysis to avoid problems: 3.44/4.59/1.15/33.4 | 3.67/4.63/0.96/26.2 | 3.27/4.58/1.30/39.8 | 3.36/4.40/1.04/31.0 | 3.48/4.76/1.27/36.5
Integrate elements of the financial management cycle: 3.53/4.51/0.98/27.8 | 3.75/4.54/0.79/21.1 | 3.39/4.46/1.09/32.2 | 3.48/4.40/0.92/26.4 | 3.58/4.61/1.03/28.8
Understand different methods for calculating HR requirements: 3.42/4.78/1.36/39.6 | 3.36/4.71/1.33/39.3 | 3.48/4.88/1.39/39.9 | 3.24/4.68/1.44/44.4 | 3.58/4.68/1.30/36.3
Recognize current challenges facing HR management: 3.69/4.91/1.22/33.1 | 3.58/4.86/1.29/36.0 | 3.76/4.97/1.19/31.5 | 3.40/4.84/1.44/42.4 | 3.91/5.00/1.09/27.9
Identify solutions to challenges facing HR management: 3.61/4.86/1.25/34.6 | 3.56/4.75/1.17/32.7 | 3.67/5.00/1.33/36.2 | 3.40/4.80/1.40/41.2 | 3.71/4.97/1.18/31.1
Understand principles of Quality of Care Development: 3.36/4.78/1.42/42.3 | 3.38/4.88/1.50/44.4 | 3.39/4.73/1.33/39.2 | 3.12/4.72/1.60/51.3 | 3.55/4.85/1.30/36.6
Understand methodology of Quality of Care Development: 3.29/4.61/1.32/40.1 | 3.33/4.63/1.29/38.7 | 3.30/4.64/1.33/39.2 | 3.04/4.56/1.52/50.0 | 3.48/4.70/1.21/34.8
Implement Quality of Care Development: 3.31/4.46/1.15/34.7 | 3.33/4.46/1.13/33.9 | 3.30/4.46/1.16/35.6 | 3.00/4.36/1.36/45.3 | 3.55/4.56/1.03/29.0
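Throughout these tables the two breakdowns, by economics degree and by years of experience, are partitions of the same respondent pool, so the subgroup Ns generally sum to the all-participant N and the subgroup means, weighted by N, reconcile with the overall mean. The sketch below illustrates only that kind of consistency check, using the "Degree module relevant to work" row of the Module 3 evaluation questions; it is not part of the report's own analysis.

```python
# Illustrative sketch only: checking that subgroup means, weighted by their Ns,
# reproduce the all-participant mean reported in the same row.

def pooled_mean(groups: list[tuple[float, int]]) -> tuple[float, int]:
    """groups is a list of (mean, n) pairs; returns (weighted mean, total n)."""
    total_n = sum(n for _, n in groups)
    weighted = sum(m * n for m, n in groups) / total_n
    return round(weighted, 2), total_n

# Module 3, "Degree module relevant to work": economics degrees 5.18 (n = 11),
# no economics degrees 5.14 (n = 14); the table reports 5.16 for all 25 participants.
print(pooled_mean([(5.18, 11), (5.14, 14)]))
```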