Sample records for evaluation

  1. An Evaluation System for the Online Training Programs in Meteorology and Hydrology

    ERIC Educational Resources Information Center

    Wang, Yong; Zhi, Xiefei

    2009-01-01

    This paper studies the current evaluation system for the online training program in meteorology and hydrology. The CIPP model, which comprises context, input, process and product evaluation, differs from the Kirkpatrick model, which comprises reactions, learning, transfer and results evaluation, in…

  2. AMEE Education Guide no. 29: evaluating educational programmes.

    PubMed

    Goldie, John

    2006-05-01

    Evaluation has become an applied science in its own right in the last 40 years. This guide reviews the history of programme evaluation, from its initial concern with methodology, through growing concern with the context of evaluation practice, to the challenge of fitting evaluation results into highly politicized and decentralized systems. It provides a framework for potential evaluators considering undertaking evaluation. Topics covered include the role of the evaluator; the ethics of evaluation; choosing the questions to be asked; evaluation design, including the dimensions of evaluation and the range of evaluation approaches available to guide evaluators; interpreting and disseminating the findings; and influencing decision making.

  3. Empowerment Evaluation: Yesterday, Today, and Tomorrow

    ERIC Educational Resources Information Center

    Fetterman, David; Wandersman, Abraham

    2007-01-01

    Empowerment evaluation continues to crystallize central issues for evaluators and the field of evaluation. A highly attended American Evaluation Association conference panel, titled "Empowerment Evaluation and Traditional Evaluation: 10 Years Later," provided an opportunity to reflect on the evolution of empowerment evaluation. Several…

  4. Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches

    ERIC Educational Resources Information Center

    O'Sullivan, Rita G.

    2012-01-01

    Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff,…

  5. Tracking hand movements captures the response dynamics of the evaluative priming effect.

    PubMed

    Kawakami, Naoaki; Miura, Emi

    2018-06-08

    We tested the response dynamics of the evaluative priming effect (i.e. facilitation of target responses following evaluatively congruent compared with evaluatively incongruent primes) using a mouse tracking procedure that records hand movements during the execution of categorisation tasks. In Experiment 1, when participants performed the evaluative categorisation task but not the non-evaluative semantic categorisation task, their mouse trajectories for evaluatively incongruent trials curved more toward the opposite response than those for evaluatively congruent trials, indicating the emergence of evaluative priming effects based on response competition. In Experiment 2, implementing a task-switching procedure in which evaluative and non-evaluative categorisation tasks were intermixed, we obtained reliable evaluative priming effects in the non-evaluative semantic categorisation task as well as in the evaluative categorisation task when participants assigned attention to the evaluative stimulus dimension. Analyses of hand movements revealed that the evaluative priming effects in the evaluative categorisation task were reflected in the mouse trajectories, while evaluative priming effects in the non-evaluative categorisation tasks were reflected in initiation times (i.e. the time elapsed between target onset and first mouse movement). Based on these findings, we discuss the methodological benefits of the mouse tracking procedure and the underlying processes of evaluative priming effects.
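
    The two dependent measures described here, trajectory curvature and initiation time, are straightforward to compute from raw cursor samples. Below is a minimal Python sketch (not the authors' code; data and function names are invented for illustration) of one common curvature index, the maximum absolute deviation of a trajectory from the straight start-to-end path, together with initiation time.

    ```python
    import numpy as np

    def max_abs_deviation(traj):
        """Maximum perpendicular deviation of a mouse trajectory from the
        straight line joining its start and end points (larger values =
        stronger curvature toward the competing response)."""
        traj = np.asarray(traj, dtype=float)
        start, end = traj[0], traj[-1]
        line = end - start
        # Perpendicular distance of every sample from the start-end line,
        # via the 2-D cross product.
        rel = traj - start
        dev = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / np.linalg.norm(line)
        return dev.max()

    def initiation_time(timestamps, traj, eps=1.0):
        """Time from target onset (timestamps[0]) until the first sample
        that moves more than `eps` pixels from the starting position."""
        traj = np.asarray(traj, dtype=float)
        dist = np.linalg.norm(traj - traj[0], axis=1)
        first_move = np.argmax(dist > eps)
        return timestamps[first_move] - timestamps[0]

    # Illustrative trial: a trajectory curving toward the opposite response.
    t = np.linspace(0.0, 1200.0, 101)  # ms since target onset
    xy = np.column_stack([
        np.linspace(0, -300, 101) + 80 * np.sin(np.linspace(0, np.pi, 101)),
        np.linspace(0, 400, 101),
    ])
    print(max_abs_deviation(xy), initiation_time(t, xy))
    ```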

  6. Does Research on Evaluation Matter? Findings from a Survey of American Evaluation Association Members and Prominent Evaluation Theorists and Scholars

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Ozeki, Satoshi; Wilson, Lyssa N.; Greenman, Gregory D., II; Schröter, Daniela C.; Hobson, Kristin A.; Azzam, Tarek; Vo, Anne T.

    2016-01-01

    Research on evaluation theories, methods, and practices has increased considerably in the past decade. Even so, little is known about whether published findings from research on evaluation are read by evaluators and whether such findings influence evaluators' thinking about evaluation or their evaluation practice. To address these questions, and…

  7. Empowerment evaluation: An approach that has literally altered the landscape of evaluation.

    PubMed

    Donaldson, Stewart I

    2017-08-01

    The quest for credible and actionable evidence to improve decision making, foster improvement, enhance self-determination, and promote social betterment is now a global phenomenon. Evaluation theorists and practitioners alike have responded to and overcome the challenges that limited the effectiveness and usefulness of traditional evaluation approaches primarily focused on seeking rigorous scientific knowledge about social programs and policies. No modern evaluation approach has received a more robust welcome from stakeholders across the globe than empowerment evaluation. Empowerment evaluation has been a leader in the development of stakeholder involvement approaches to evaluation, setting a high bar. In addition, empowerment evaluation's respect for community knowledge and commitment to the people's right to build their own evaluation capacity has influenced the evaluation mainstream, particularly concerning evaluation capacity building. Empowerment evaluation's most significant contributions to the field have been in improving evaluation use and knowledge utilization. Copyright © 2016. Published by Elsevier Ltd.

  8. Recommendations and Improvements for the Evaluation of Integrated Community-Wide Interventions Approaches.

    PubMed

    van Koperen, Tessa M; Renders, Carry M; Spierings, Eline J M; Hendriks, Anna-Marie; Westerman, Marjan J; Seidell, Jacob C; Schuit, Albertine J

    2016-01-01

    Background. Integrated community-wide intervention approaches (ICIAs) are implemented to prevent childhood obesity. Programme evaluation improves these ICIAs, but professionals involved often struggle with performance. Evaluation tools have been developed to support Dutch professionals involved in ICIAs. It is unclear how useful these tools are to intended users. We therefore researched the facilitators of and barriers to ICIA programme evaluation as perceived by professionals and their experiences of the evaluation tools. Methods. Focus groups and interviews with 33 public health professionals. Data were analysed using a thematic content approach. Findings. Evaluation is hampered by insufficient time, budget, and experience with ICIAs, lack of leadership, and limited advocacy for evaluation. Epidemiologists are regarded as responsible for evaluation but feel incompetent to perform evaluation or advocate its need in a political environment. Managers did not prioritise process evaluations, involvement of stakeholders, and capacity building. The evaluation tools are perceived as valuable but too comprehensive considering limited resources. Conclusion. Evaluating ICIAs is important but most professionals are unfamiliar with it and management does not prioritise process evaluation nor incentivize professionals to evaluate. To optimise programme evaluation, more resources and coaching are required to improve professionals' evaluation capabilities and specifically the use of evaluation.

  9. Recommendations and Improvements for the Evaluation of Integrated Community-Wide Interventions Approaches

    PubMed Central

    Spierings, Eline J. M.; Westerman, Marjan J.; Seidell, Jacob C.; Schuit, Albertine J.

    2016-01-01

    Background. Integrated community-wide intervention approaches (ICIAs) are implemented to prevent childhood obesity. Programme evaluation improves these ICIAs, but professionals involved often struggle with performance. Evaluation tools have been developed to support Dutch professionals involved in ICIAs. It is unclear how useful these tools are to intended users. We therefore researched the facilitators of and barriers to ICIA programme evaluation as perceived by professionals and their experiences of the evaluation tools. Methods. Focus groups and interviews with 33 public health professionals. Data were analysed using a thematic content approach. Findings. Evaluation is hampered by insufficient time, budget, and experience with ICIAs, lack of leadership, and limited advocacy for evaluation. Epidemiologists are regarded as responsible for evaluation but feel incompetent to perform evaluation or advocate its need in a political environment. Managers did not prioritise process evaluations, involvement of stakeholders, and capacity building. The evaluation tools are perceived as valuable but too comprehensive considering limited resources. Conclusion. Evaluating ICIAs is important but most professionals are unfamiliar with it and management does not prioritise process evaluation nor incentivize professionals to evaluate. To optimise programme evaluation, more resources and coaching are required to improve professionals' evaluation capabilities and specifically the use of evaluation. PMID:28116149

  10. A novel resident-as-teacher training program to improve and evaluate obstetrics and gynecology resident teaching skills.

    PubMed

    Ricciotti, Hope A; Dodge, Laura E; Head, Julia; Atkins, K Meredith; Hacker, Michele R

    2012-01-01

    Residents play a significant role in teaching, but formal training, feedback, and evaluation are needed. Our aims were to assess resident teaching skills in the resident-as-teacher program, quantify correlations of faculty evaluations with resident self-evaluations, compare resident-as-teacher evaluations with clinical evaluations, and evaluate the resident-as-teacher program. The resident-as-teacher training program is a simulated, videotaped teaching encounter with a trained medical student and standardized teaching evaluation tool. Evaluations from the resident-as-teacher training program were compared to evaluations of resident teaching done by faculty, residents, and medical students from the clinical setting. Faculty evaluation of resident teaching skills in the resident-as-teacher program showed a mean total score of 4.5 ± 0.5 with statistically significant correlations between faculty assessment and resident self-evaluations (r = 0.47; p < 0.001). However, resident self-evaluation of teaching skill was lower than faculty evaluation (mean difference: 0.4; 95% CI 0.3-0.6). When compared to the clinical setting, resident-as-teacher evaluations were significantly correlated with faculty and resident evaluations, but not medical student evaluations. Evaluations from both the resident-as-teacher program and the clinical setting improved with duration of residency. The resident-as-teacher program provides a method to train, give feedback, and evaluate resident teaching.
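
    The statistics reported here, a Pearson correlation between faculty and self-evaluations and a paired mean difference with a 95% confidence interval, can be reproduced in a few lines of SciPy. The sketch below uses invented paired scores on the 1-5 scale; it is illustrative only, not the study's analysis code.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired scores on the 1-5 evaluation scale (invented).
    faculty = np.array([4.6, 4.2, 4.9, 4.4, 4.7, 4.0, 4.8, 4.3])
    self_ev = np.array([4.1, 3.9, 4.4, 4.0, 4.3, 3.6, 4.5, 3.9])

    # Correlation between faculty assessment and resident self-evaluation.
    r, p = stats.pearsonr(faculty, self_ev)

    # Paired mean difference with a 95% confidence interval.
    diff = faculty - self_ev
    ci = stats.t.interval(0.95, len(diff) - 1,
                          loc=diff.mean(), scale=stats.sem(diff))
    print(f"r = {r:.2f} (p = {p:.3f}); "
          f"mean difference = {diff.mean():.2f}, 95% CI {ci}")
    ```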

  11. Reflections on Empowerment Evaluation: Learning from Experience.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    1999-01-01

    Reflects on empowerment evaluation, the use of evaluation to foster improvement and self-determination. Empowerment evaluation uses quantitative and qualitative methods, and usually focuses on program evaluation. Discusses the growth in empowerment evaluation as a result of interest in participatory evaluation. (SLD)

  12. Methods of Product Evaluation. Guide Number 10. Evaluation Guides Series.

    ERIC Educational Resources Information Center

    St. John, Mark

    In this guide the logic of product evaluation is described in a framework that is meant to be general and adaptable to all kinds of evaluations. Evaluators should consider using the logic and methods of product evaluation when (1) the purpose of the evaluation is to aid evaluators in making a decision about purchases; (2) a comprehensive…

  13. Evaluation of clinical practice guidelines.

    PubMed Central

    Basinski, A S

    1995-01-01

    Compared with the current focus on the development of clinical practice guidelines the effort devoted to their evaluation is meagre. Yet the ultimate success of guidelines depends on routine evaluation. Three types of evaluation are identified: evaluation of guidelines under development and before dissemination and implementation, evaluation of health care programs in which guidelines play a central role, and scientific evaluation, through studies that provide the scientific knowledge base for further evolution of guidelines. Identification of evaluation and program goals, evaluation design and a framework for evaluation planning are discussed. PMID:7489550

  14. Timing of Emergency Medicine Student Evaluation Does Not Affect Scoring.

    PubMed

    Hiller, Katherine M; Waterbrook, Anna; Waters, Kristina

    2016-02-01

    Evaluation of medical students rotating through the emergency department (ED) is an important formative and summative assessment method. Intuitively, delaying evaluation should affect the reliability of this assessment method, however, the effect of evaluation timing on scoring is unknown. A quality-improvement project evaluating the timing of end-of-shift ED evaluations at the University of Arizona was performed to determine whether delay in evaluation affected the score. End-of-shift ED evaluations completed on behalf of fourth-year medical students from July 2012 to March 2013 were reviewed. Forty-seven students were evaluated 547 times by 46 residents and attendings. Evaluation scores were means of anchored Likert scales (1-5) for the domains of energy/interest, fund of knowledge, judgment/problem-solving ability, clinical skills, personal effectiveness, and systems-based practice. Date of shift, date of evaluation, and score were collected. Linear regression was performed to determine whether timing of the evaluation had an effect on evaluation score. Data were complete for 477 of 547 evaluations (87.2%). Mean evaluation score was 4.1 (range 2.3-5, standard deviation 0.62). Evaluations took a mean of 8.5 days (median 4 days, range 0-59 days, standard deviation 9.77 days) to complete. Delay in evaluation had no significant effect on score (p = 0.983). The evaluation score was not affected by timing of the evaluation. Variance in scores was similar for both immediate and delayed evaluations. Considerable amounts of time and energy are expended tracking down delayed evaluations. This activity does not impact a student's final grade. Copyright © 2016 Elsevier Inc. All rights reserved.
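
    A regression of evaluation score on evaluation delay, as described here, takes only a few lines of SciPy. The sketch below runs on invented records and is meant only to show the shape of the analysis, not to reproduce the study's data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical records: days from shift to evaluation, and mean Likert score.
    delay_days = np.array([0, 1, 2, 4, 4, 7, 10, 15, 22, 30, 45, 59])
    scores = np.array([4.2, 4.0, 4.3, 4.1, 3.9, 4.2, 4.1, 4.0, 4.2, 4.1, 4.0, 4.1])

    # Simple linear regression of score on evaluation delay.
    result = stats.linregress(delay_days, scores)
    print(f"slope = {result.slope:.4f}, p = {result.pvalue:.3f}")
    # A p-value near 1 (the study reports p = 0.983) indicates that delay
    # has no detectable effect on the score.
    ```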

  15. Government and voluntary sector differences in organizational capacity to do and use evaluation.

    PubMed

    Cousins, J Bradley; Goh, Swee C; Elliott, Catherine; Aubry, Tim; Gilbert, Nathalie

    2014-06-01

    Research on evaluation capacity is limited although a recent survey article on integrating evaluation into the organizational culture (Cousins, Goh, Clark, & Lee, 2004) revealed that interest in the topic is increasing. While knowledge about building the capacity to do evaluation has developed considerably, less is understood about building the organizational capacity to use evaluation. This article reports on the results of a pan-Canadian survey of evaluators working in organizations (internal evaluators or organization members with evaluation responsibility) conducted in 2007. Reliability across all constructs was high. Responses from government evaluators (N=160) were compared to responses from evaluators who work in the voluntary sector (N=89). The former were found to self-identify more highly as 'evaluators' (specialists) whereas the latter tended to identify as 'managers' (non-specialists). As a result, government evaluators had significantly higher self-reported levels of evaluation knowledge (both theory and practice); and they spent more time performing evaluation functions. However, irrespective of role, voluntary sector respondents rated their organizations more favorably than did their government sector counterparts with respect to the antecedents or conditions supporting evaluation capacity, and the capacity to use evaluation. Results are discussed in terms of their implications for evaluation practice and ongoing research. Copyright © 2013 Elsevier Ltd. All rights reserved.
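
    The sector comparison described here amounts to a two-sample test on survey scale scores. The following Python sketch shows one plausible form of such a comparison, a Welch's t-test on simulated scores; the scale, means, and effect size are invented, not taken from the study.

    ```python
    import numpy as np
    from scipy import stats

    # Simulated self-reported evaluation-knowledge scores (1-7 scale, invented)
    # for N=160 government and N=89 voluntary-sector respondents.
    government = np.random.default_rng(1).normal(5.4, 0.8, 160)
    voluntary = np.random.default_rng(2).normal(4.9, 0.9, 89)

    # Welch's t-test (no equal-variance assumption) for the sector difference.
    t, p = stats.ttest_ind(government, voluntary, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```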

  16. The current status of theory evaluation in nursing.

    PubMed

    Im, Eun-Ok

    2015-10-01

    To identify the current status of theory evaluation in nursing and provide directions for theory evaluation for future development of theoretical bases of nursing discipline. Theory evaluation is an essential component in development of nursing knowledge, which is a critical element in development of nursing discipline. Despite earlier significant efforts for theory evaluation in nursing, a recent decline in the number of theory evaluation articles was noted and there have been few updates on theory evaluation in nursing. Discussion paper. A total of 58 articles published from 2003-2014 were retrieved through searches of PubMed, PsycINFO and CINAHL. The articles were sorted by the area of evaluation and analysed to identify themes reflecting the theory evaluation process. Diverse ways of theory evaluation need to be continuously used in future theory evaluation efforts. Six themes reflecting the theory evaluation process were identified: (a) rarely using existing theory evaluation criteria; (b) evaluating specifics; (c) using various statistical analysis methods; (d) developing instruments; (e) adopting in practice and education; and (f) evaluating mainly middle-range theories and situation-specific theories. © 2015 John Wiley & Sons Ltd.

  17. Meta-Evaluation

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.

    2011-01-01

    Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…

  18. Bringing Evaluative Learning to Life

    ERIC Educational Resources Information Center

    King, Jean A.

    2008-01-01

    This excerpt from the opening plenary asks evaluators to consider two questions regarding learning and evaluation: (a) How do evaluators know if, how, when, and what people are learning during an evaluation? and (b) In what ways can evaluation be a learning experience? To answer the first question, evaluators can apply the commonplaces of…

  19. The Evaluation Handbook: Guidelines for Evaluating Dropout Prevention Programs.

    ERIC Educational Resources Information Center

    Smink, Jay; Stank, Peg

    This manual, developed in an effort to take the mysticism out of program evaluation, discusses six phases of the program evaluation process. The introduction discusses reasons for evaluation, process and outcome evaluation, the purpose of the handbook, the evaluation process, and the Sequoia United School District Dropout Prevention Program. Phase…

  20. Teacher Education Program Evaluation: An Annotated Bibliography and Guide to Research.

    ERIC Educational Resources Information Center

    Ayers, Jerry B.; Berney, Mary F.

    This book includes an annotated bibliography of the essentials needed to conduct an effective evaluation of a teacher education program. Specific information on evaluation includes: (1) general evaluation techniques, (2) evaluation of candidates and students, (3) evaluation of the knowledge base, (4) quality controls, (5) evaluation of laboratory…

  1. The Influence of Evaluators' Principles on Evaluation Resource Decisions

    ERIC Educational Resources Information Center

    Crohn, Kara Shea Davis

    2009-01-01

    This study examines ways in which evaluators' principles influence decisions about evaluation resources. Evaluators must seek out and allocate (often scarce) resources (e.g., money, time, data, people, places) in a way that allows them to conduct the best possible evaluation given clients' and evaluation participants' constraints. Working within…

  2. 38 CFR 21.6052 - Evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 21.6052 Evaluations. (a) Scope and nature of evaluation. The scope and nature of the evaluation under this program shall be the same as for an evaluation of the reasonable feasibility of...

  3. The Use of Multiple Evaluation Approaches in Program Evaluation

    ERIC Educational Resources Information Center

    Bledsoe, Katrina L.; Graham, James A.

    2005-01-01

    The authors discuss the use of multiple evaluation approaches in conducting program evaluations. Specifically, they illustrate four evaluation approaches (theory-driven, consumer-based, empowerment, and inclusive evaluation) and briefly discuss a fifth (use-focused evaluation) as a side effect of the use of the others. The authors also address the…

  4. The Art of Evaluation: A Handbook for Educators and Trainers.

    ERIC Educational Resources Information Center

    Fenwick, Tara J.; Parsons, Jim

    This book introduces adult educators and trainers to the principles and techniques of learner evaluation in the various contexts of adult education. The following are among the topics discussed: (1) the purposes of evaluation (the importance of authentic evaluation; principles of evaluation; traps in evaluation); (2) evaluating one's philosophy…

  5. A merged model of quality improvement and evaluation: maximizing return on investment.

    PubMed

    Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C

    2013-11-01

    Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of the evaluation from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and link processes and outputs with longer-term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.

  6. Barriers to and Facilitators of the Evaluation of Integrated Community-Wide Overweight Intervention Approaches: A Qualitative Case Study in Two Dutch Municipalities

    PubMed Central

    van Koperen, Tessa M.; de Kruif, Anja; van Antwerpen, Lisa; Hendriks, Anna-Marie; Seidell, Jacob C.; Schuit, Albertine J.; Renders, Carry M.

    2016-01-01

    To prevent overweight and obesity the implementation of an integrated community-wide intervention approach (ICIA) is often advocated. Evaluation can enhance implementation of such an approach and demonstrate the extent of effectiveness. To be able to support professionals in the evaluation of ICIAs we studied barriers to and facilitators of ICIA evaluation. In this study ten professionals of two Dutch municipalities involved in the evaluation of an ICIA participated. We conducted semi-structured interviews (n = 12), observed programme meetings (n = 4) and carried out document analysis. Data were analyzed using a thematic content approach. We learned that evaluation is hampered when it is perceived as unfeasible due to limited time and budget, a lack of evaluation knowledge or a negative evaluation attitude. Other barriers are a poor understanding of the evaluation process and its added value to optimizing the programme. Sufficient communication between involved professionals on evaluation can facilitate evaluation, as does support for evaluation of ICIAs together with stakeholders at a strategic and tactical level. To stimulate the evaluation of ICIAs, we recommend supporting professionals in securing evaluation resources, providing tailored training and tools to enhance evaluation competences and stimulating strategic communication on evaluation. PMID:27043600

  7. Barriers to and Facilitators of the Evaluation of Integrated Community-Wide Overweight Intervention Approaches: A Qualitative Case Study in Two Dutch Municipalities.

    PubMed

    van Koperen, Tessa M; de Kruif, Anja; van Antwerpen, Lisa; Hendriks, Anna-Marie; Seidell, Jacob C; Schuit, Albertine J; Renders, Carry M

    2016-03-31

    To prevent overweight and obesity the implementation of an integrated community-wide intervention approach (ICIA) is often advocated. Evaluation can enhance implementation of such an approach and demonstrate the extent of effectiveness. To be able to support professionals in the evaluation of ICIAs we studied barriers to and facilitators of ICIA evaluation. In this study ten professionals of two Dutch municipalities involved in the evaluation of an ICIA participated. We conducted semi-structured interviews (n = 12), observed programme meetings (n = 4) and carried out document analysis. Data were analyzed using a thematic content approach. We learned that evaluation is hampered when it is perceived as unfeasible due to limited time and budget, a lack of evaluation knowledge or a negative evaluation attitude. Other barriers are a poor understanding of the evaluation process and its added value to optimizing the programme. Sufficient communication between involved professionals on evaluation can facilitate evaluation, as does support for evaluation of ICIAs together with stakeholders at a strategic and tactical level. To stimulate the evaluation of ICIAs, we recommend supporting professionals in securing evaluation resources, providing tailored training and tools to enhance evaluation competences and stimulating strategic communication on evaluation.

  8. Non-Deployable Soldiers: Understanding the Army’s Challenge

    DTIC Science & Technology

    2011-05-07

    Subject terms: Medically Not Ready (MNR), Warrior Transition Unit (WTU), Disability Evaluation System (DES), Physical Evaluation Board (PEB), Medical Evaluation Board (MEB), MOS Medical Retention Board (MMRB), Human Capital Enterprise, Personnel Management, Physical Evaluations System.

  9. When Mode Does Not Matter: Evaluation in Class versus Out of Class

    ERIC Educational Resources Information Center

    Kordts-Freudinger, Robert; Geithner, Eva

    2013-01-01

    This article investigates whether online evaluation leads to different results than paper-and-pencil evaluation. Given that most previous studies confound the evaluation mode (online versus paper) with the evaluation situation (in class versus after class), we expected that evaluation results would be influenced only by the evaluation situation,…

  10. Evaluator and Program Manager Perceptions of Evaluation Capacity and Evaluation Practice

    ERIC Educational Resources Information Center

    Fierro, Leslie A.; Christie, Christina A.

    2017-01-01

    The evaluation community has demonstrated an increased emphasis and interest in evaluation capacity building in recent years. A need currently exists to better understand how to measure evaluation capacity and its potential outcomes. In this study, we distributed an online questionnaire to managers and evaluation points of contact working in…

  11. Documenting Evaluation Use: Guided Evaluation Decisionmaking. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Burry, James

    This paper documents the evaluation use process among districts using the Guide for Evaluation Decision Makers, published by the Center for the Study of Evaluation (CSE) during the 1984-85 school year. Included are the following: (1) a discussion of research that led to conclusions concerning the administrator's role in evaluation use; (2) a…

  12. Toward Better Research on--and Thinking about--Evaluation Influence, Especially in Multisite Evaluations

    ERIC Educational Resources Information Center

    Mark, Melvin M.

    2011-01-01

    Evaluation is typically carried out with the intention of making a difference in the understandings and actions of stakeholders and decision makers. The author provides a general review of the concepts of evaluation "use," evaluation "influence," and "influence pathways," with connections to multisite evaluations. The study of evaluation influence…

  13. Informing the Discussion on Evaluator Training: A Look at Evaluators' Course Taking and Professional Practice

    ERIC Educational Resources Information Center

    Christie, Christina A.; Quiñones, Patricia; Fierro, Leslie

    2014-01-01

    This classification study examines evaluators' coursework training as a way of understanding evaluation practice. Data regarding courses that span methods and evaluation topics were collected from evaluation practitioners. Using latent class analysis, we establish four distinct classes of evaluator course-taking patterns: quantitative,…

  14. Improving Beta Test Evaluation Response Rates: A Meta-Evaluation

    ERIC Educational Resources Information Center

    Russ-Eft, Darlene; Preskill, Hallie

    2005-01-01

    This study presents a meta-evaluation of a beta-test of a customer service training program. The initial evaluation showed a low response rate. Therefore, the meta-evaluation focused on issues related to the conduct of the initial evaluation and reasons for nonresponse. The meta-evaluation identified solutions to the nonresponse problem as related…

  15. Nurturing Professional Growth: A Peer Review Model for Independent Evaluators

    ERIC Educational Resources Information Center

    Bond, Sally L.; Ray, Marilyn L.

    2006-01-01

    There has been a recent groundswell of support in the American Evaluation Association's Independent Consulting Topical Interest Group (IC TIG) for evaluating evaluators' work just as evaluators evaluate the work of their clients. To facilitate this self-evaluation, the IC TIG elected to create a peer review process that focuses on written…

  16. Does sunshine prime loyal … or summer? Effects of associative relatedness on the evaluative priming effect in the valent/neutral categorisation task.

    PubMed

    Werner, Benedikt; von Ramin, Elisabeth; Spruyt, Adriaan; Rothermund, Klaus

    2018-02-01

    After 30 years of research, the mechanisms underlying the evaluative priming effect are still a topic of debate. In this study, we tested whether the evaluative priming effect can result from (uncontrolled) associative relatedness rather than evaluative congruency. Stimuli that share the same evaluative connotation are more likely to show some degree of non-evaluative associative relatedness than stimuli that have a different evaluative connotation. Therefore, unless associative relatedness is explicitly controlled for, evaluative priming effects reported in earlier research may be driven by associative relatedness instead of evaluative relatedness. To address this possibility, we performed an evaluative priming study in which evaluative congruency and associative relatedness were manipulated independently from each other. The valent/neutral categorisation task was used to ensure evaluative stimulus processing in the absence of response priming effects. Results showed an effect of associative relatedness but no (overall) effect of evaluative congruency. Our findings highlight the importance of controlling for associative relatedness when testing for evaluative priming effects.
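
    The design described here crosses evaluative congruency with associative relatedness, so the natural analysis is a two-way factorial model. The sketch below simulates trial-level reaction times in which only relatedness carries an effect, mirroring the reported pattern, and fits a two-way ANOVA with statsmodels; all values are invented and this is not the authors' analysis script.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated trial-level reaction times for the 2x2 design (invented);
    # only relatedness shifts the mean, as in the reported result.
    rng = np.random.default_rng(0)
    rows = []
    for cong in ("congruent", "incongruent"):
        for rel in ("related", "unrelated"):
            base = 600 if rel == "related" else 625  # relatedness effect only
            rows += [{"rt": rng.normal(base, 40), "congruency": cong,
                      "relatedness": rel} for _ in range(50)]
    df = pd.DataFrame(rows)

    # Two-way ANOVA on reaction time.
    model = smf.ols("rt ~ C(congruency) * C(relatedness)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```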

  17. Evaluator competencies in the context of diversity training: The practitioners' point of view.

    PubMed

    Froncek, Benjamin; Mazziotta, Agostino; Piper, Verena; Rohmann, Anette

    2018-04-01

    Evaluator competencies have been discussed since the beginnings of program evaluation literature. More recently, the Essential Competencies for Program Evaluators (Ghere et al., 2006; Stevahn, King, Ghere & Minnema, 2005a) have proven to be a useful taxonomy for learning and improving evaluation practice. Evaluation is critical to diversity training activities, and diversity training providers face the challenge of conducting evaluations of their training programs. We explored what competencies are viewed as instrumental to conducting useful evaluations in this specific field of evaluation practice. In an online survey, N = 172 diversity training providers were interviewed via an open answer format about their perceptions of evaluator competencies, with n = 95 diversity training providers contributing statements. The Essential Competencies for Program Evaluators were used to conduct a deductive qualitative content analysis of the statements. While systematic inquiry, reflective practice, and interpersonal competence were well represented, situational analysis and project management were not. Implications are discussed for evaluation capacity building among diversity training providers and for negotiating evaluation projects with evaluation professionals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Attitudes toward evaluation: An exploratory study of students' and stakeholders' social representations.

    PubMed

    Schultes, Marie-Therese; Kollmayer, Marlene; Mejeh, Mathias; Spiel, Christiane

    2018-06-15

    Positive attitudes toward evaluation among stakeholders are an important precondition for successful evaluation processes. However, empirical studies focusing on stakeholders' attitudes toward evaluation are scarce. The present paper explores the approach of assessing social representations as indicators of people's attitudes toward evaluation. In an exploratory study, two groups were surveyed: University students (n = 60) with rather theoretical knowledge of evaluation and stakeholders (n = 61) who had shortly before taken part in participatory evaluation studies. Both groups were asked to name their free associations with the term "evaluation", which were subsequently analyzed lexicographically. The results indicate different social representations of evaluation in the two groups. The student group primarily saw evaluation as an "appraisal", whereas the stakeholders emphasized the "improvement" resulting from evaluation. Implications for further evaluation research and practice are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
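
    A lexicographic analysis of free associations begins with simple word frequencies per group. The minimal Python sketch below shows that first step on invented associations; the study's actual corpus and method are richer than this.

    ```python
    from collections import Counter

    # Invented free associations with the term "evaluation".
    students = ["appraisal", "grade", "test", "appraisal", "judgement", "exam"]
    stakeholders = ["improvement", "feedback", "improvement", "learning", "change"]

    def relative_frequencies(associations):
        """Relative frequency of each association within its group, used to
        compare the groups' dominant social representations."""
        counts = Counter(associations)
        total = sum(counts.values())
        return {word: n / total for word, n in counts.most_common()}

    print(relative_frequencies(students))
    print(relative_frequencies(stakeholders))
    ```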

  19. Evaluation and its importance for nursing practice.

    PubMed

    Moule, Pam; Armoogum, Julie; Douglass, Emma; Taylor, Julie

    2017-04-26

    Evaluation of service delivery is an important aspect of nursing practice. Service evaluation is being increasingly used and led by nurses, who are well placed to evaluate service and practice delivery. This article defines evaluation of services and wider care delivery and its relevance in NHS practice and policy. It aims to encourage nurses to think about how evaluation of services or practice differs from research and audit activity and to consider why and how they should use evaluation in their practice. A process for planning and conducting an evaluation and disseminating findings is presented. Evaluation in the healthcare context can be a complicated activity and some of the potential challenges of evaluation are described, alongside possible solutions. Further resources and guidance on evaluation activity to support nurses' ongoing development are identified.

  20. Evaluation readiness: improved evaluation planning using a data inventory framework.

    PubMed

    Cohen, A B; Hall, K C; Cohodes, D R

    1985-01-01

    Factors intrinsic to many programs, such as ambiguously stated objectives, inadequately defined performance measures, and incomplete or unreliable databases, often conspire to limit the evaluability of these programs. Current evaluation planning approaches are somewhat constrained in their ability to overcome these obstacles and to achieve full preparedness for evaluation. In this paper, the concept of evaluation readiness is introduced as a complement to other evaluation planning approaches, most notably that of evaluability assessment. The basic products of evaluation readiness--the formal program definition and the data inventory framework--are described, along with a guide for assuring more timely and appropriate evaluation response capability to support the decision making needs of program managers. The utility of evaluation readiness for program planning, as well as for effective management, is also discussed.

  1. EVALUE : a computer program for evaluating investments in forest products industries

    Treesearch

    Peter J. Ince; Philip H. Steele

    1980-01-01

    EVALUE, a FORTRAN program, was developed to provide a framework for cash flow analysis of investment opportunities. EVALUE was designed to assist researchers in evaluating investment feasibility of new technology or new manufacturing processes. This report serves as user documentation for the EVALUE program. EVALUE is briefly described and notes on preparation of a...
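
    EVALUE's core task, discounted cash-flow analysis of an investment opportunity, can be illustrated in a few lines. The sketch below is not the FORTRAN program itself but a minimal net-present-value calculation on invented cash flows, showing the kind of computation such a framework automates.

    ```python
    # A minimal discounted-cash-flow sketch of the kind of analysis EVALUE
    # performs (the actual program's inputs and outputs are more extensive;
    # these figures are invented).

    def npv(rate, cash_flows):
        """Net present value of cash_flows[t] received at the end of year t."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Year 0 investment followed by annual net revenues from a new process.
    flows = [-250_000, 60_000, 75_000, 80_000, 80_000, 80_000]
    for rate in (0.05, 0.10, 0.15):
        print(f"discount rate {rate:.0%}: NPV = {npv(rate, flows):,.0f}")
    ```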

  2. Toward a Collective Approach to Course Evaluation in Curriculum Development, A Contemporary Perspective

    ERIC Educational Resources Information Center

    Nyabero, Charles

    2016-01-01

    The purpose of this article was to explore how course evaluation, the decision-making process, the methodology of evaluation and the various roles of evaluation interact in the process of curriculum development. In the course of this exploration, the characteristics and types of evaluation, the purposes of course evaluation, the methodology of evaluation,…

  3. Using Evaluability Assessment to Improve Program Evaluation for the Blue-Throated Macaw Environmental Education Project in Bolivia

    ERIC Educational Resources Information Center

    Salvatierra da Silva, Daniela; Jacobson, Susan K.; Monroe, Martha C.; Israel, Glenn D.

    2016-01-01

    An evaluability assessment of a program to save a critically endangered bird helped prepare the Blue-throated Macaw Environmental Education Project for evaluation and program improvement. The evaluability assessment facilitated agreement among key stakeholders on evaluation criteria and intended uses of evaluation information in order to maximize…

  4. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and design evaluations...

  5. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and design evaluations...

  6. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and design evaluations...

  7. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and design evaluations...

  8. 40 CFR Table 6 to Subpart Wwww of... - Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Basic Requirements for Performance Tests, Performance Evaluations, and Design Evaluations for New and Existing Sources Using Add-On Control Devices. As required in § 63.5850 you must conduct performance tests, performance evaluations, and design evaluations...

  9. How Do You Evaluate Everyone Who Isn't a Teacher? An Adaptable Evaluation Model for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; And Others

    The evaluation of professional support personnel in the schools has been a neglected area in educational evaluation. The Center for Research on Educational Accountability and Teacher Evaluation (CREATE) has worked to develop a conceptually sound evaluation model and then to translate the model into practical evaluation procedures that facilitate…

  10. L'évaluation des politiques institutionnelles d'évaluation des apprentissages. Rapport synthèse (The Evaluation of Institutional Policies of Evaluation of Learning. Synthesis Report). 2410-0520.

    ERIC Educational Resources Information Center

    Lindfelt, Bengt, Ed.

    In accordance with provincial educational regulations, Quebec's community colleges have adopted "politiques institutionnelles d'évaluation des apprentissages" (PIEA), or institutional policies of the evaluation of learning. This report provides a synthesis of evaluations of the PIEA conducted by the province's Commission on the…

  11. Aligning Collaborative and Culturally Responsive Evaluation Approaches

    ERIC Educational Resources Information Center

    Askew, Karyl; Beverly, Monifa Green; Jay, Michelle L.

    2012-01-01

    The authors, three African-American women trained as collaborative evaluators, offer a comparative analysis of collaborative evaluation (O'Sullivan, 2004) and culturally responsive evaluation approaches (Frierson, Hood, & Hughes, 2002; Kirkhart & Hopson, 2010). Collaborative evaluation techniques immerse evaluators in the cultural milieu…

  12. Adolescents' explicit and implicit evaluations of hypothetical and actual peers with different bullying participant roles.

    PubMed

    Pouwels, J Loes; Lansu, Tessa A M; Cillessen, Antonius H N

    2017-07-01

    This study examined how adolescents evaluate bullying at three levels of specificity: (a) the general concept of bullying, (b) hypothetical peers in different bullying participant roles, and (c) actual peers in different bullying participant roles. Participants were 163 predominantly ethnic majority adolescents in The Netherlands (58% girls; M age = 16.34 years, SD = 0.79). For the hypothetical peers, we examined adolescents' explicit evaluations as well as their implicit evaluations. Adolescents evaluated the general concept of bullying negatively. Adolescents' explicit evaluations of hypothetical and actual peers in the bullying roles depended on their own role, but adolescents' implicit evaluations of hypothetical peers did not. Adolescents' explicit evaluations of hypothetical peers and actual peers were different. Hypothetical bullies were evaluated negatively by all classmates, whereas hypothetical victims were evaluated relatively positively compared with the other roles. However, when adolescents evaluated their actual classmates, the differences between bullies and the other roles were smaller, whereas victims were evaluated the most negatively of all roles. Further research should take into account that adolescents' evaluations of hypothetical peers differ from their evaluations of actual peers. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Developing Your Evaluation Plans: A Critical Component of Public Health Program Infrastructure.

    PubMed

    Lavinghouze, S Rene; Snyder, Kimberly

    A program's infrastructure is often cited as critical to public health success. The Component Model of Infrastructure (CMI) identifies evaluation as essential under the core component of engaged data. An evaluation plan is a written document that describes how to monitor and evaluate a program, as well as how to use evaluation results for program improvement and decision making. The evaluation plan clarifies how to describe what the program did, how it worked, and why outcomes matter. We use the Centers for Disease Control and Prevention's (CDC) "Framework for Program Evaluation in Public Health" as a guide for developing an evaluation plan. Just as using a roadmap facilitates progress on a long journey, a well-written evaluation plan can clarify the direction your evaluation takes and facilitate achievement of the evaluation's objectives.

  14. [Assessment of research papers in medical university staff evaluation].

    PubMed

    Zhou, Qing-hui

    2012-06-01

    Medical university staff evaluation is a substantial branch of education administration at medical universities. The number of research papers produced, as a direct index of academic research achievement, plays an important role in academic research evaluation; the influence of those papers is an indirect index. This paper introduces the indexes commonly used in the evaluation of academic research papers and analyses the applicability and limitations of each. The author argues that academic research evaluation in education administration, which is based mainly on the evaluation of research papers, should combine the evaluation of the journals in which papers are published with peer review of the papers themselves, and integrate qualitative with quantitative evaluation, in order to establish an objective academic research evaluation system for medical university staff.

  15. Evaluation: Review of the Past, Preview of the Future.

    ERIC Educational Resources Information Center

    Smith, M. F.

    1994-01-01

    This paper summarized contributors' ideas about evaluation as a field and where it is going. Topics discussed were qualitative versus quantitative debate; evaluation's purpose; professionalization; program failure; program development; evaluators as advocates; evaluation knowledge; evaluation expansion; and methodology and design. (SLD)

  16. 48 CFR 45.202 - Evaluation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Government Property, Solicitation and Evaluation Procedures, § 45.202 Evaluation procedures. (a) The... evaluation purposes only, a rental equivalent evaluation factor. (b) The contracting officer shall ensure the...

  17. 34 CFR 300.304 - Evaluation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Educational Placements, Evaluations and Reevaluations, § 300.304 Evaluation procedures. (a) Notice. The public... conducting the evaluation, the public agency must— (1) Use a variety of assessment tools and strategies to... evaluation procedures. Each public agency must ensure that— (1) Assessments and other evaluation materials...

  18. Experiments in evaluation capacity building: Enhancing brain disorders research impact in Ontario.

    PubMed

    Nylen, Kirk; Sridharan, Sanjeev

    2017-05-08

    This paper is the introductory paper on a forum on evaluation capacity building for enhancing impacts of research on brain disorders. It describes challenges and opportunities of building evaluation capacity among community-based organizations in Ontario involved in enhancing brain health and supporting people living with a brain disorder. Using an example of a capacity building program called the "Evaluation Support Program", which is run by the Ontario Brain Institute, this forum discusses multiple themes including evaluation capacity building, evaluation culture and evaluation methodologies appropriate for evaluating complex community interventions. The goal of the Evaluation Support Program is to help community-based organizations build the capacity to demonstrate the value that they offer in order to improve, sustain, and spread their programs and activities. One of the features of this forum is that perspectives on the Evaluation Support Program are provided by multiple stakeholders, including the community-based organizations, evaluation team members involved in capacity building, thought leaders in the fields of evaluation capacity building and evaluation culture, and the funders. Copyright © 2017. Published by Elsevier Ltd.

  19. Evaluation of competence-based teaching in higher education: From theory to practice.

    PubMed

    Bergsmann, Evelyn; Schultes, Marie-Therese; Winter, Petra; Schober, Barbara; Spiel, Christiane

    2015-10-01

    Competence-based teaching in higher education institutions and its evaluation have become a prevalent topic especially in the European Union. However, evaluation instruments are often limited, for example to single student competencies or specific elements of the teaching process. The present paper provides a more comprehensive evaluation concept that contributes to sustainable improvement of competence-based teaching in higher education institutions. The evaluation concept considers competence research developments as well as the participatory evaluation approach. The evaluation concept consists of three stages. The first stage evaluates whether the competencies students are supposed to acquire within the curriculum (ideal situation) are well defined. The second stage evaluates the teaching process and the competencies students have actually acquired (real situation). The third stage evaluates concrete aspects of the teaching process. Additionally, an implementation strategy is introduced to support the transfer from the theoretical evaluation concept to practice. The evaluation concept and its implementation strategy are designed for internal evaluations in higher education and primarily address higher education institutions that have already developed and conducted a competence-based curriculum. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Application of a responsive evaluation approach in medical education.

    PubMed

    Curran, Vernon; Christopher, Jeanette; Lemire, Francine; Collins, Alice; Barrett, Brendan

    2003-03-01

    This paper reports on the usefulness of a responsive evaluation model in evaluating the clinical skills assessment and training (CSAT) programme at the Faculty of Medicine, Memorial University of Newfoundland, Canada. The purpose of this paper is to introduce the responsive evaluation approach, ascertain its utility, feasibility, propriety and accuracy in a medical education context, and discuss its applicability as a model for medical education programme evaluation. Robert Stake's original 12-step responsive evaluation model was modified and reduced to five steps, including: (1) stakeholder audience identification, consultation and issues exploration; (2) stakeholder concerns and issues analysis; (3) identification of evaluative standards and criteria; (4) design and implementation of evaluation methodology; and (5) data analysis and reporting. This modified responsive evaluation process was applied to the CSAT programme and a meta-evaluation was conducted to evaluate the effectiveness of the approach. The responsive evaluation approach was useful in identifying the concerns and issues of programme stakeholders, solidifying the standards and criteria for measuring the success of the CSAT programme, and gathering rich and descriptive evaluative information about educational processes. The evaluation was perceived to be human resource dependent in nature, yet was deemed to have been practical, efficient and effective in uncovering meaningful and useful information for stakeholder decision-making. Responsive evaluation is derived from the naturalistic paradigm and concentrates on examining the educational process rather than predefined outcomes of the process. Responsive evaluation results are perceived as having more relevance to stakeholder concerns and issues, and therefore more likely to be acted upon. Conducting an evaluation that is responsive to the needs of these groups will ensure that evaluative information is meaningful and more likely to be used for programme enhancement and improvement.

  1. Clinical Performance Evaluations of Third-Year Medical Students and Association With Student and Evaluator Gender.

    PubMed

    Riese, Alison; Rappaport, Leah; Alverson, Brian; Park, Sangshin; Rockney, Randal M

    2017-06-01

    Clinical performance evaluations are major components of medical school clerkship grades. But are they sufficiently objective? This study aimed to determine whether student and evaluator gender is associated with assessment of overall clinical performance. This was a retrospective analysis of 4,272 core clerkship clinical performance evaluations by 829 evaluators of 155 third-year students, within the Alpert Medical School grading database for the 2013-2014 academic year. Overall clinical performance, assessed on a three-point scale (meets expectations, above expectations, exceptional), was extracted from each evaluation, as well as evaluator gender, age, training level, department, student gender and age, and length of observation time. Hierarchical ordinal regression modeling was conducted to account for clustering of evaluations. Female students were more likely to receive a better grade than males (adjusted odds ratio [AOR] 1.30, 95% confidence interval [CI] 1.13-1.50), and female evaluators awarded lower grades than males (AOR 0.72, 95% CI 0.55-0.93), adjusting for department, observation time, and student and evaluator age. The interaction between student and evaluator gender was significant (P = .03), with female evaluators assigning higher grades to female students, while male evaluators' grading did not differ by student gender. Students who spent a short time with evaluators were also more likely to get a lower grade. A one-year examination of all third-year clerkship clinical performance evaluations at a single institution revealed that male and female evaluators rated male and female students differently, even when accounting for other measured variables.
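
    The analysis reported here is a hierarchical ordinal regression on a three-point grade scale, summarized as adjusted odds ratios. The Python sketch below fits a plain (non-hierarchical) ordered logit with statsmodels on simulated records, so it omits the clustering-by-evaluator adjustment the study used; variable names, effect sizes, and data are all invented.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Simulated evaluation records (all values invented for illustration).
    rng = np.random.default_rng(7)
    n = 500
    df = pd.DataFrame({
        "student_female":   rng.integers(0, 2, n),
        "evaluator_female": rng.integers(0, 2, n),
        "obs_days":         rng.integers(1, 15, n),
    })
    # Latent performance score driving the ordinal grade.
    latent = (0.3 * df["student_female"] - 0.3 * df["evaluator_female"]
              + 0.05 * df["obs_days"] + rng.logistic(size=n))
    df["grade"] = pd.cut(latent, [-np.inf, 0.5, 2.0, np.inf],
                         labels=["meets", "above", "exceptional"], ordered=True)

    # Plain ordered logit; the published model additionally accounted for
    # clustering of evaluations within evaluators.
    model = OrderedModel(df["grade"],
                         df[["student_female", "evaluator_female", "obs_days"]],
                         distr="logit")
    res = model.fit(method="bfgs", disp=False)
    print(np.exp(res.params.iloc[:3]))  # odds ratios for the three predictors
    ```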

  2. Health services research evaluation principles. Broadening a general framework for evaluating health information technology.

    PubMed

    Sockolow, P S; Crawford, P R; Lehmann, H P

    2012-01-01

    Our forthcoming national experiment in increased health information technology (HIT) adoption, funded by the American Recovery and Reinvestment Act of 2009, will require a comprehensive approach to evaluating HIT. The quality of HIT evaluation studies to date limits the generalizability of findings and the depth of lessons learned, revealing a need for broader evaluation frameworks. Our objective was to develop an informatics evaluation framework for HIT that integrates components of health services research (HSR) evaluation and informatics evaluation to address identified shortcomings in available HIT evaluation frameworks. A systematic literature review updated and expanded the exhaustive review by Ammenwerth and de Keizer (AdK). From retained studies, criteria were elicited and organized into classes within a framework. The resulting Health Information Technology Research-based Evaluation Framework (HITREF) was used to guide clinician satisfaction survey construction, multi-dimensional analysis of data, and interpretation of findings in an evaluation of a vanguard community health care EHR. The updated review identified 128 electronic health record (EHR) evaluation studies and seven evaluation criteria not in AdK: EHR Selection/Development/Training; Patient Privacy Concerns; Unintended Consequences/Benefits; Functionality; Patient Satisfaction with EHR; Barriers/Facilitators to Adoption; and Patient Satisfaction with Care. HITREF was used productively and proved to be a complete evaluation framework, encompassing all themes that emerged. We recommend that future EHR evaluators consider adding a complete, research-based HIT evaluation framework, such as HITREF, to their evaluation tool suite to monitor HIT challenges as the federal government strives to increase HIT adoption.

  3. School Evaluation and Accreditation: A Bibliography of Research Studies.

    ERIC Educational Resources Information Center

    Diamond, Joan

    1982-01-01

    This 97-item bibliography cites research in the following categories: purposes and structures of school accreditation/evaluation; the school evaluation process, involving self-study, team visits, and implementation; evaluation of the accreditation/evaluation process; external factors influencing school accreditation/evaluation; and objectivity in…

  4. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  5. 48 CFR 215.305 - Proposal evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....305 Proposal evaluation. (a)(2) Past performance evaluation. When a past performance evaluation is... Business Concerns, the evaluation factors shall include the past performance of offerors in complying with requirements of that clause. When a past performance evaluation is required by FAR 15.304, and the solicitation...

  6. 48 CFR 315.305 - Proposal evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... following elements: (1) An explanation of the evaluation process and the role of evaluators throughout the... include, at a minimum, the following elements: (1) A list of recommended technical evaluation panel... that the technical evaluation will have in the award decision. (2) The technical evaluation process...

  7. The Practice of Health Program Evaluation.

    PubMed

    Lewis, Sarah R

    2017-11-01

    The Practice of Health Program Evaluation provides an overview of the evaluation process for public health programs while diving deeper to address select advanced concepts and techniques. The book unfolds evaluation as a three-phased process consisting of identification of evaluation questions, data collection and analysis, and dissemination of results and recommendations. The text covers research design and sampling methods, as well as quantitative and qualitative approaches. Types of evaluation are also discussed, including economic assessment and systems research as relative newcomers. Aspects critical to conducting a successful evaluation regardless of type or research design are emphasized, such as stakeholder engagement, validity and reliability, and adoption of sound recommendations. The book encourages evaluators to document their approach by developing an evaluation plan, a data analysis plan, and a dissemination plan, in order to help build consensus throughout the process. The text offers a good bird's-eye view of the evaluation process, while providing guidance for evaluation experts on how to navigate political waters and advocate for their findings to help effect change.

  8. La peur de l'evaluation: evaluation de l'enseignement ou du sujet? (Fear of Evaluation: Evaluating the Teacher or the Subject?)

    ERIC Educational Resources Information Center

    Kosmidou-Hardy, Chryssoula; Marmarinos, Jean

    2001-01-01

    Addresses questions related to the evaluation of teachers, with specific attention to why there is such teacher resistance. Theorizes that it is the teachers' fear of evaluation of their personal identity rather than their professional competence that lies behind their resistance to evaluation. Calls for the use of action research as a basic…

  9. Non-formal educator use of evaluation results.

    PubMed

    Baughman, Sarah; Boyd, Heather H; Franz, Nancy K

    2012-08-01

    Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational programs through land grant universities. Many Extension services require that non-formal educational program evaluations be conducted by field-based Extension educators. Evaluation research has focused primarily on the efforts of professional, external evaluators; the work of program staff with many responsibilities, including program evaluation, has received little attention. This study examined how field-based Extension educators (i.e., program staff) in four Extension services use the results of evaluations of programs that they have conducted themselves. Four types of evaluation use are measured and explored: instrumental use, conceptual use, persuasive use, and process use. Results indicate that there are few programmatic changes as a result of evaluation findings among the non-formal educators surveyed in this study. Extension educators tend to use evaluation results to persuade others about the value of their programs and to learn from the evaluation process. Evaluation use is driven by accountability measures, with very little program-improvement use as measured in this study. Practical implications include delineating accountability and program-improvement tasks within complex organizations in order to align evaluation efforts and to improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. A new evaluation tool to obtain practice-based evidence of worksite health promotion programs.

    PubMed

    Dunet, Diane O; Sparling, Phillip B; Hersey, James; Williams-Piehota, Pamela; Hill, Mary D; Hanssen, Carl; Lawrenz, Frances; Reyes, Michele

    2008-10-01

    The Centers for Disease Control and Prevention developed the Swift Worksite Assessment and Translation (SWAT) evaluation method to identify promising practices in worksite health promotion programs. The new method complements research studies and evaluation studies of evidence-based practices that promote healthy weight in working adults. We used nationally recognized program evaluation standards of utility, feasibility, accuracy, and propriety as the foundation for our 5-step method: 1) site identification and selection, 2) site visit, 3) post-visit evaluation of promising practices, 4) evaluation capacity building, and 5) translation and dissemination. An independent, outside evaluation team conducted process and summative evaluations of SWAT to determine its efficacy in providing accurate, useful information and its compliance with evaluation standards. The SWAT evaluation approach is feasible in small and medium-sized workplace settings. The independent evaluation team judged SWAT favorably as an evaluation method, noting among its strengths its systematic and detailed procedures and service orientation. Experts in worksite health promotion evaluation concluded that the data obtained by using this evaluation method were sufficient to allow them to make judgments about promising practices. SWAT is a useful, business-friendly approach to systematic, yet rapid, evaluation that comports with program evaluation standards. The method provides a new tool to obtain practice-based evidence of worksite health promotion programs that help prevent obesity and, more broadly, may advance public health goals for chronic disease prevention and health promotion.

  11. Consider the source: persuasion of implicit evaluations is moderated by source credibility.

    PubMed

    Smith, Colin Tucker; De Houwer, Jan; Nosek, Brian A

    2013-02-01

    The long history of persuasion research shows how to change explicit, self-reported evaluations through direct appeals. At the same time, research on how to change implicit evaluations has focused almost entirely on techniques of retraining existing evaluations or manipulating contexts. In five studies, we examined whether direct appeals can change implicit evaluations in the same way as they do explicit evaluations. Across the studies, both explicit and implicit evaluations showed greater evidence of persuasion following information presented by a highly credible source than by a source low in credibility. Whereas cognitive load did not alter the effect of source credibility on explicit evaluations, source credibility had an effect on the persuasion of implicit evaluations only when participants were encouraged and able to consider information about the source. Our findings reveal the relevance of persuasion research for changing implicit evaluations and provide new ideas about the processes underlying both types of evaluation.

  12. Evaluative judgments are based on evaluative information: Evidence against meaning change in evaluative context effects.

    PubMed

    Kaplan, M F

    1975-07-01

    Trait adjectives commonly employed in person perception studies have both evaluative and denotative meanings. Evaluative ratings of single traits shift with variations in the context of other traits ascribed to the stimulus person; the extent to which denotative changes underlie these evaluative context effects has been a theoretical controversy. In the first experiment, it was shown that context effects on quantitative ratings of denotation can be largely accounted for by evaluative halo effects. In the second experiment, increasing the denotative relatedness of context traits to the test trait did not increase the effect of the context. Only the evaluative meaning of the context affected evaluation of the rated test trait. These studies suggest that the denotative relationship between a test adjective and its context has little influence on context effects in person perception, and that denotative meaning changes do not mediate context effects. Instead, evaluative judgments appear to be based on evaluative meaning.

  13. Factors affecting evaluation culture within a non-formal educational organization.

    PubMed

    Vengrin, Courtney; Westfall-Rudd, Donna; Archibald, Thomas; Rudd, Rick; Singh, Kusum

    2018-08-01

    While research has been done on many aspects of evaluation within a variety of contexts and organizations, there is a lack of research surrounding the culture of evaluation. This study examined evaluative culture in one of the world's largest non-formal educational organizations using an online survey and quantitative methodology. A path model was developed to examine the factors affecting evaluation culture. Results show that perception regarding evaluation, program area, college major, location, training in evaluation, degree level, and years of experience together explained 28% of the variance in evaluation culture. Results also indicate that leadership greatly affects the culture of evaluation. By taking a closer look at the evaluation culture of a large non-formal educational organization, much can be learned about how to better develop and support evaluative work in other similar organizations and programs. Copyright © 2018 Elsevier Ltd. All rights reserved.
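
    For readers unfamiliar with "variance explained," the sketch below shows the generic computation of R-squared from an ordinary least-squares fit. It is illustrative only: the study used a path model, and all data and coefficients here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 7 predictors (standing in for perception, program area,
# training, etc.) for 200 hypothetical respondents.
X = rng.normal(size=(200, 7))
true_beta = np.array([0.4, 0.2, 0.1, 0.0, 0.3, 0.1, 0.2])
y = X @ true_beta + rng.normal(scale=1.5, size=200)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R^2 = 1 - SS_residual / SS_total: the share of outcome variance explained.
resid = y - X1 @ beta
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}")  # e.g., 0.28 would mean 28% of variance explained
```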

  14. A Design Taxonomy Utilizing Ten Major Evaluation Strategies.

    ERIC Educational Resources Information Center

    Willis, Barry

    This paper discusses ten evaluation strategies selected on the basis of their general acceptance and their relatively unique approach to the field: (1) Stake, "Countenance of Evaluation"; (2) Stufflebeam, "Decision Centered Evaluation (CIPP)"; (3) Provus, "Discrepancy Evaluation"; (4) Scriven, "Goal Free Evaluation"; (5) Scriven, "Formative and…

  15. Evaluation as Empowerment and the Evaluator as Enabler.

    ERIC Educational Resources Information Center

    Whitmore, Elizabeth

    One rationale for implementing a particular evaluation approach is the empowerment of stakeholders. Evaluation as empowerment and possible links between empowerment and increased utilization of evaluation results are explored. Evaluation as empowerment assumes that individuals need to be personally productive and responsible in coping with their…

  16. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  17. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  18. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  19. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2013-07-01 2013-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  20. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  1. Influences on Evaluation Quality

    ERIC Educational Resources Information Center

    Cooksy, Leslie J.; Mark, Melvin M.

    2012-01-01

    Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010 Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b) aspects of the evaluation environment or context,…

  2. Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.

    PubMed

    Campagne, Fabien

    2008-02-29

    The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79-0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86-0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute for human evaluations.
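
    The rank agreement reported above is a standard Spearman correlation between system orderings; a minimal sketch, assuming SciPy is available and using invented bpref scores:

```python
from scipy.stats import spearmanr

# Invented bpref scores for six retrieval systems under the two protocols.
nt_eval   = [0.42, 0.35, 0.51, 0.28, 0.47, 0.33]  # No Title Evaluation
trec_eval = [0.45, 0.31, 0.55, 0.30, 0.44, 0.37]  # human TREC judgments

rho, p = spearmanr(nt_eval, trec_eval)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```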

  3. Self-evaluated and Close Relative-Evaluated Epworth Sleepiness Scale vs. Multiple Sleep Latency Test in Patients with Obstructive Sleep Apnea

    PubMed Central

    Li, Yun; Zhang, Jihui; Lei, Fei; Liu, Hong; Li, Zhe; Tang, Xiangdong

    2014-01-01

    Objectives: The aims of this study were to determine (1) the agreement in Epworth Sleepiness Scale (ESS) scores evaluated by patients and their close relatives (CRs), and (2) the correlation of objective sleepiness as measured by the multiple sleep latency test (MSLT) with self-evaluated and close relative-evaluated ESS. Methods: A total of 85 consecutive patients with obstructive sleep apnea (OSA) (70 males, age 46.7 ± 12.9 years) with an apnea-hypopnea index (AHI) > 5 events per hour (mean 38.9 ± 26.8/h) were recruited into this study. All participants underwent an overnight polysomnographic assessment (PSG), MSLT, and ESS rated by both patients and their CRs. Mean sleep latency < 8 min on MSLT was considered objective daytime sleepiness. Results: Self-evaluated global ESS score (ESSG) was closely correlated with evaluation by CRs (r = 0.79, p < 0.001); the mean ESSG score evaluated by patients did not significantly differ from that evaluated by CRs (p > 0.05). However, a Bland-Altman plot showed individual differences between self-evaluated and CR-evaluated ESS scores, with a 95% CI of -9.3 to 7.0. The mean sleep latency on MSLT was significantly associated with CR-evaluated ESSG (r = -0.23, p < 0.05); the association with self-evaluated ESSG was marginal (r = -0.21, p = 0.05). Conclusions: CR-evaluated ESS has a good correlation but also significant individual disagreement with self-evaluated ESS in Chinese patients with OSA. CR-evaluated ESS performs as well as, if not better than, self-evaluated ESS in this population when referring to MSLT. Citation: Li Y; Zhang J; Lei F; Liu H; Li Z; Tang X. Self-evaluated and close relative-evaluated Epworth Sleepiness Scale vs. multiple sleep latency test in patients with obstructive sleep apnea. J Clin Sleep Med 2014;10(2):171-176. PMID:24533000
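
    The individual disagreement reported above comes from a Bland-Altman analysis; a minimal sketch of the underlying computation (mean difference, with 1.96-SD limits around it), using hypothetical ESS scores:

```python
import numpy as np

def bland_altman_limits(scores_a, scores_b):
    """Bias and 95% limits of agreement between two raters' scores."""
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical global ESS scores (0-24) from patients and close relatives.
patients  = [12, 8, 15, 10, 18, 6, 14, 11]
relatives = [10, 9, 17, 8, 16, 9, 15, 13]

bias, lower, upper = bland_altman_limits(patients, relatives)
print(f"bias = {bias:.1f}, limits of agreement [{lower:.1f}, {upper:.1f}]")
```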

  4. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. Results The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. Conclusions The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems. PMID:27311735

  5. Evaluating Health Information Systems Using Ontologies.

    PubMed

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-06-16

    There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems-whether similar or heterogeneous-by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union countries. The relevance of the evaluation aspects created by the UVON method for the FI-STAR project was validated by the corresponding stakeholders of each case. These evaluation aspects were extracted from a UVON-generated ontology structure that reflects both the internally declared required quality attributes in the 7 eHealth applications of the FI-STAR project and the evaluation aspects recommended by the Model for ASsessment of Telemedicine applications (MAST) evaluation framework. The extracted evaluation aspects were used to create questionnaires (for the corresponding patients and health professionals) to evaluate each individual case and the whole of the FI-STAR project. The UVON method can provide a relevant set of evaluation aspects for a heterogeneous set of health information systems by organizing, unifying, and aggregating the quality attributes through ontological structures. Those quality attributes can be either suggested by evaluation models or elicited from the stakeholders of those systems in the form of system requirements. The method continues to be systematic, context sensitive, and relevant across a heterogeneous set of health information systems.
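
    The published UVON implementation is not reproduced here. Purely as an illustration of the general idea (unifying per-system quality attributes under higher-level aspects and aggregating across systems), the following sketch uses invented system names, attributes, and aspect categories:

```python
from collections import Counter

# Quality attributes elicited from three hypothetical eHealth systems.
systems = {
    "app_a": {"response time", "data privacy", "usability"},
    "app_b": {"usability", "interoperability", "data privacy"},
    "app_c": {"availability", "data privacy", "response time"},
}

# A hand-made mini-ontology mapping attributes to broader evaluation aspects.
ontology = {
    "performance":     {"response time", "availability"},
    "security":        {"data privacy"},
    "user experience": {"usability"},
    "integration":     {"interoperability"},
}

# Unify and aggregate: count how many systems contribute to each aspect.
aspect_counts = Counter()
for attrs in systems.values():
    for aspect, members in ontology.items():
        if members & attrs:
            aspect_counts[aspect] += 1

# Aspects shared by the most systems first; these could seed questionnaires.
for aspect, n in aspect_counts.most_common():
    print(f"{aspect}: present in {n} of {len(systems)} systems")
```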

  6. Changing CS Features Alters Evaluative Responses in Evaluative Conditioning

    ERIC Educational Resources Information Center

    Unkelbach, Christian; Stahl, Christoph; Forderer, Sabine

    2012-01-01

    Evaluative conditioning (EC) refers to changes in people's evaluative responses toward initially neutral stimuli (CSs) by mere spatial and temporal contiguity with other positive or negative stimuli (USs). We investigate whether changing CS features from conditioning to evaluation also changes people's evaluative response toward these CSs. We used…

  7. Which Way Is Better for Teacher Evaluation? The Discourse on Teacher Evaluation in Taiwan

    ERIC Educational Resources Information Center

    Wang, Juei-Hsin; Chen, Yen-Ting

    2016-01-01

    There are no summative evaluations for compulsory and basic education in Taiwan. This research discusses and analyzes the present implementation of teacher evaluation, which now takes the form of "teacher evaluation for professional development", a voluntary growing project of…

  8. Evaluation of Instructional Materials. Position Paper No. 1.

    ERIC Educational Resources Information Center

    Ward, Ted

    The position paper on the evaluation of instructional materials by the Michigan State University Regional Instructional Materials Center for Handicapped Children and Youth (IMC HCY) examines the professional and ethical dilemmas of evaluation and presents evaluation policies of the center. Evaluated by a roster of field evaluators throughout the…

  9. Evaluating the Impact of HRD.

    ERIC Educational Resources Information Center

    1998

    This document contains four papers from a symposium on evaluating the impact of human resource development (HRD). "The Politics of Program Evaluation and the Misuse of Evaluation Findings" (Hallie Preskill, Robin Lackey) discusses the status of evaluation theory, evaluation as a political activity, and the findings from a survey on the…

  10. Evaluator Training: Content and Topic Valuation in University Evaluation Courses

    ERIC Educational Resources Information Center

    Davies, Randall; MacKay, Kathryn

    2014-01-01

    Quality training opportunities for evaluators will always be important to the evaluation profession. While studies have documented the number of university programs providing evaluation training, additional information is needed concerning what content is being taught in current evaluation courses. This article summarizes the findings of a survey…

  11. Evaluation Thesaurus. Second Edition.

    ERIC Educational Resources Information Center

    Scriven, Michael

    This thesaurus to the evaluation field is not restricted to educational evaluation or to program evaluation, but also refers to product, personnel, and proposal evaluation, as well as to quality control, the grading of work samples, and to all the other areas in which disciplined evaluation is practiced. It contains many suggestions, procedures,…

  12. University Evaluations and Different Evaluation Approaches: A Finnish Perspective

    ERIC Educational Resources Information Center

    Liuhanen, Anna-Maija

    2005-01-01

    Evaluation of higher education can be described as a species of its own, with only a few connections with other fields of evaluation. When considering future developments in higher education evaluation (quality assurance), it is useful to observe its similarities to and differences from various evaluation approaches in other than higher education…

  13. Evaluating Motor and Perceptual-Motor Development: Evaluating the Psychomotor Functioning of Infants and Young Children.

    ERIC Educational Resources Information Center

    Cooper, Walter E.

    The author considers the importance of evaluating preschoolers' perceptual motor development, the usefulness of various evaluation techniques, and the specific psychomotor abilities that require evaluation. He quotes researchers to underline the difficulty of choosing appropriate evaluative techniques and to stress the importance of taking…

  14. Evaluating Computer-Based Assessment in a Risk-Based Model

    ERIC Educational Resources Information Center

    Zakrzewski, Stan; Steven, Christine; Ricketts, Chris

    2009-01-01

    There are three purposes for evaluation: evaluation for action to aid the decision making process, evaluation for understanding to further enhance enlightenment and evaluation for control to ensure compliance to standards. This article argues that the primary function of evaluation in the "Catherine Wheel" computer-based assessment (CBA)…

  15. Foundations of Reporting: Or Bartlett's Guide to Evaluation Communication.

    ERIC Educational Resources Information Center

    Holley, Freda M.

    The author believes her ideas on evaluation reporting are old ideas in various fields including communication theory, advertising, social science, and learning theory. The human factor in reporting evaluation must be considered. Those being evaluated often feel threatened by the evaluation. Evaluators need to accept the behaviors of evaluation…

  16. Evaluation: The Process of Stimulating, Aiding, and Abetting Insightful Action.

    ERIC Educational Resources Information Center

    Guba, Egon G.; Stufflebeam, Daniel L.

    Part 1 of this monograph discusses the status of educational evaluation and describes several problems in carrying out such evaluation: (1) defining the educational setting, (2) defining decision types, (3) designing educational evaluation, (4) designing evaluation systems, and (5) defining criteria for judging evaluation. Part 2 proposes an…

  17. 25 CFR 1000.355 - How are trust evaluations conducted?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How are trust evaluations conducted? 1000.355 Section... EDUCATION ACT Trust Evaluation Review Annual Trust Evaluations § 1000.355 How are trust evaluations conducted? (a) Each year the Secretary's designated representative(s) will conduct trust evaluations for...

  18. 38 CFR 21.8030 - Requirement for evaluation of child.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... evaluation of child. 21.8030 Section 21.8030 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... Certain Children of Vietnam Veterans-Spina Bifida and Covered Birth Defects Evaluation § 21.8030 Requirement for evaluation of child. (a) Children to be evaluated. The VR&E Division will evaluate each child...

  19. Challenges and Opportunities for Evaluating Environmental Education Programs

    ERIC Educational Resources Information Center

    Carleton-Hug, Annelise; Hug, J. William

    2010-01-01

    Environmental education organizations can do more to either institute evaluation or improve the quality of their evaluation. In an effort to help evaluators bridge the gap between the potential for high quality evaluation systems to improve environmental education, and the low level of evaluation in actual practice, we reviewed recent…

  20. An Empirical Examination of Validity in Evaluation

    ERIC Educational Resources Information Center

    Peck, Laura R.; Kim, Yushim; Lucio, Joanna

    2012-01-01

    This study addresses validity issues in evaluation that stem from Ernest R. House's book, "Evaluating With Validity". The authors examine "American Journal of Evaluation" articles from 1980 to 2010 that report the results of policy and program evaluations. The authors classify these evaluations according to House's "major approaches" typology…

  1. Cross-Continental Reflections on Evaluation Practice: Methods, Use, and Valuing

    ERIC Educational Resources Information Center

    Kallemeyn, Leanne M.; Hall, Jori; Friche, Nanna; McReynolds, Clifton

    2015-01-01

    The evaluation theory tree typology reflects the following three components of evaluation practice: (a) methods, (b) use, and (c) valuing. The purpose of this study was to explore how evaluation practice is conceived as reflected in articles published in the "American Journal of Evaluation" ("AJE") and "Evaluation," a…

  2. Participatory Evaluation as Seen in a Vygotskian Framework

    ERIC Educational Resources Information Center

    Higa, Terry Ann F.; Brandon, Paul R.

    2008-01-01

    In participatory evaluations of K-12 programs, evaluators develop school faculty's and administrators' evaluation capacity by training them to conduct evaluation tasks and providing consultation while the tasks are conducted. A strong case can be made that the capacity building in these evaluations can be examined using a Vygotskian approach. We…

  3. Reflections and Future Prospects for Evaluation in Human Resource Development

    ERIC Educational Resources Information Center

    Han, Heeyoung; Boulay, David

    2013-01-01

    Human resource development (HRD) evaluation has often been criticized for its limited function in organizational decision making. This article reviews evaluation studies to uncover the current status of HRD evaluation literature. The authors further discuss general evaluation theories in terms of value, use, and evaluator role to extend the…

  4. Special Education Program Evaluation: A Planning Guide. An Overview. CASE Commissioned Series.

    ERIC Educational Resources Information Center

    McLaughlin, John A.

    This resource guide is intended to help in planning special education program evaluations. It focuses on: basic evaluation concepts, identification of special education decision makers and their information needs, specific evaluation questions, procedures for gathering relevant information, and evaluation of the evaluation process itself.…

  5. Effectiveness of the Marine Corps’ Junior Enlisted Performance Evaluation System: An Evaluation of Proficiency and Conduct Marks

    DTIC Science & Technology

    2017-03-01

    Effectiveness of the Marine Corps' Junior Enlisted Performance Evaluation System: An Evaluation of Proficiency and Conduct Marks, by Richard B. Larger Jr. Subject terms: performance evaluation, proficiency marks, conduct marks.

  6. Rural Principals and the North Carolina Teacher Evaluation Process: How Has the Transition from the TPAI-R to the New Evaluation Process Changed Principals' Evaluative Practices?

    ERIC Educational Resources Information Center

    Fuller, Charles Avery

    2016-01-01

    Beginning with the 2010-2011 school year the North Carolina State Board of Education (SBE) mandated the use of the North Carolina Teacher Evaluation Process (Evaluation Process) for use in all public school systems in the state to conduct teacher observations and evaluations. The Evaluation Process replaced the Teacher Performance Appraisal…

  7. Evaluation Planning, Evaluation Management, and Utilization of Evaluation Results within Adult Literacy Campaigns, Programs and Projects (with Implications for Adult Basic Education and Nonformal Education Programs in General). A Working Paper.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    Addressed to professionals involved in program evaluation, this working paper covers various aspects of evaluation planning, including the following: planning as a sociotechnical process, steps in evaluation planning, program planning and implementation versus evaluation planning and implementation, the literacy system and its subsystems, and some…

  8. An evaluability assessment of a West Africa based Non-Governmental Organization's (NGO) progressive evaluation strategy.

    PubMed

    D'Ostie-Racine, Léna; Dagenais, Christian; Ridde, Valéry

    2013-02-01

    While program evaluations are increasingly valued by international organizations to inform practices and public policies, actual evaluation use (EU) in such contexts is inconsistent. Moreover, empirical literature on EU in the context of humanitarian Non-Governmental Organizations (NGOs) is very limited. The current article focuses on the evaluability assessment (EA) of a West-Africa based humanitarian NGO's progressive evaluation strategy. Since 2007, the NGO has established an evaluation strategy to inform its maternal and child health care user-fee exemption intervention. Using Wholey's (2004) framework, the current EA enabled us to clarify with the NGO's evaluation partners the intent of their evaluation strategy and to design its program logic model. The EA ascertained the plausibility of the evaluation strategy's objectives, the accessibility of relevant data, and the utility for intended users of evaluating both the evaluation strategy and the conditions that foster EU. Hence, key evaluability conditions for an EU study were assured. This article provides an example of EA procedures when such guidance is scant in the literature. It also offers an opportunity to analyze critically the use of EAs in the context of a humanitarian NGO's collaboration with evaluators and political actors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Interobserver reproducibility and accuracy of p16/Ki-67 dual-stain cytology in cervical cancer screening.

    PubMed

    Wentzensen, Nicolas; Fetterman, Barbara; Tokugawa, Diane; Schiffman, Mark; Castle, Philip E; Wood, Shannon N; Stiemerling, Eric; Poitras, Nancy; Lorey, Thomas; Kinney, Walter

    2014-12-01

    Dual-stain cytology for p16 and Ki-67 has been proposed as a biomarker in cervical cancer screening. The authors evaluated the reproducibility and accuracy of dual-stain cytology among 10 newly trained evaluators. In total, 480 p16/Ki-67-stained slides from human papillomavirus-positive women were evaluated in masked fashion by 10 evaluators. None of the evaluators had previous experience with p16 or p16/Ki-67 cytology. All participants underwent p16/Ki-67 training and subsequent proficiency testing. Reproducibility of dual-stain cytology was measured using the percentage agreement, individual and aggregate κ values, as well as McNemar statistics. Clinical performance for the detection of cervical intraepithelial neoplasia grade 2 or greater (CIN2+) was evaluated for each individual evaluator and for all evaluators combined compared with the reference evaluation by a cytotechnologist who had extensive experience with dual-stain cytology. The percentage agreement of individual evaluators with the reference evaluation ranged from 83% to 91%, and the κ values ranged from 0.65 to 0.81. The combined κ value was 0.71 for all evaluators and 0.73 for cytotechnologists. The average sensitivity and specificity for the detection of CIN2+ among novice evaluators was 82% and 64%, respectively; whereas the reference evaluation had 84% sensitivity and 63% specificity, respectively. Agreement on dual-stain positivity increased with greater numbers of p16/Ki-67-positive cells on the slides. Good to excellent reproducibility of p16/Ki-67 dual-stain cytology was observed with almost identical clinical performance of novice evaluators compared with reference evaluations. The current findings suggest that p16/Ki-67 dual-stain evaluation can be implemented in routine cytology practice with limited training. © 2014 American Cancer Society.
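
    The κ statistic used above to quantify interobserver reproducibility can be computed directly from paired categorical calls; a minimal sketch with hypothetical slide readings:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical calls on the same items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Expected agreement if the raters were independent.
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical dual-stain calls (1 = positive, 0 = negative) on 12 slides.
novice    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
reference = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(novice, reference):.2f}")
```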

  10. Institutional design and utilization of evaluation: a contribution to a theory of evaluation influence based on Swiss experience.

    PubMed

    Balthasar, Andreas

    2009-06-01

    Growing interest in the institutionalization of evaluation in the public administration raises the question as to which institutional arrangement offers optimal conditions for the utilization of evaluations. Institutional arrangement denotes the formal organization of processes and competencies, together with procedural rules, that are applicable independently of individual evaluation projects. It reflects the evaluation practice of an institution and defines the distance between evaluators and evaluees. This article outlines the results of a broad-based study of all 300 or so evaluations that the Swiss Federal Administration completed from 1999 to 2002. On this basis, it derives a theory of the influence of institutional factors on the utilization of evaluations.

  11. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and infrared and visible image fusion results under different algorithms and environments are evaluated experimentally on the basis of this index. The experimental results show that the objective evaluation index is consistent with subjective evaluation, indicating that the method is a practical and effective measure of fused image quality.
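
    The paper's exact index is not reproduced here. As a rough illustration of an energy-weighted structural-similarity score, the sketch below weights each source image's SSIM against the fused image by its variance share (global weights only; the published index also uses local windows and an edge-retention term). It assumes scikit-image is available.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fusion_quality(fused, ir, vis):
    """Variance-weighted average SSIM of a fused image against its sources."""
    e_ir, e_vis = ir.var(), vis.var()
    w_ir = e_ir / (e_ir + e_vis)  # higher-energy source gets more weight
    drange = float(max(f.max() for f in (fused, ir, vis))
                   - min(f.min() for f in (fused, ir, vis)))
    s_ir = ssim(fused, ir, data_range=drange)
    s_vis = ssim(fused, vis, data_range=drange)
    return w_ir * s_ir + (1.0 - w_ir) * s_vis

# Toy example: naive average fusion of two random "images".
rng = np.random.default_rng(1)
ir, vis = rng.random((64, 64)), rng.random((64, 64))
fused = 0.5 * (ir + vis)
print(f"quality = {fusion_quality(fused, ir, vis):.3f}")
```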

  12. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation.

    PubMed

    Boerebach, Benjamin C M; Arah, Onyebuchi A; Busch, Olivier R C; Lombarts, Kiki M J M H

    2012-01-01

    In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation of Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self evaluations and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggests the need for using external feedback sources in informed self evaluation of surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  13. Using evaluation theory in priority setting and resource allocation.

    PubMed

    Smith, Neale; Mitton, Craig; Cornelissen, Evelyn; Gibson, Jennifer; Peacock, Stuart

    2012-01-01

    Public sector interest in methods for priority setting and program or policy evaluation has grown considerably over the last several decades, given increased expectations for accountable and efficient use of resources and emphasis on evidence-based decision making as a component of good management practice. While there has been some occasional effort to conduct evaluation of priority setting projects, the literatures around priority setting and evaluation have largely evolved separately. In this paper, the aim is to bring them together. The contention is that evaluation theory is a means by which evaluators reflect upon what it is they are doing when they do evaluation work. Theories help to organize thinking, sort out relevant from irrelevant information, provide transparent grounds for particular implementation choices, and can help resolve problematic issues which may arise in the conduct of an evaluation project. A detailed review of three major branches of evaluation theory--methods, utilization, and valuing--identifies how such theories can guide the development of efforts to evaluate priority setting and resource allocation initiatives. Evaluation theories differ in terms of their guiding question, anticipated setting or context, evaluation foci, perspective from which benefits are calculated, and typical methods endorsed. Choosing a particular theoretical approach will structure the way in which any priority setting process is evaluated. The paper suggests that explicitly considering evaluation theory makes key aspects of the evaluation process more visible to all stakeholders, and can assist in the design of effective evaluation of priority setting processes; this should iteratively serve to improve the understanding of priority setting practices themselves.

  14. Learning while evaluating: the use of an electronic evaluation portfolio in a geriatric medicine clerkship

    PubMed Central

    Duque, Gustavo; Finkelstein, Adam; Roberts, Ayanna; Tabatabai, Diana; Gold, Susan L; Winer, Laura R

    2006-01-01

    Background Electronic evaluation portfolios may play a role in learning and evaluation in clinical settings and may complement other traditional evaluation methods (bedside evaluations, written exams and tutor-led evaluations). Methods 133 third-year medical students used the McGill Electronic Evaluation Portfolio (MEEP) during their one-month clerkship rotation in Geriatric Medicine between September 2002 and September 2003. Students were divided into two groups, one who received an introductory hands-on session about the electronic evaluation portfolio and one who did not. Students' marks in their portfolios were compared between both groups. Additionally, students self-evaluated their performance and received feedback using the electronic portfolio during their mandatory clerkship rotation. Students were surveyed immediately after the rotation and at the end of the clerkship year. Tutors' opinions about this method were surveyed once. Finally, the number of evaluations/month was quantified. In all surveys, Likert scales were used and were analyzed using Chi-square tests and t-tests to assess significant differences in the responses from surveyed subjects. Results The introductory session had a significant effect on students' portfolio marks as well as on their comfort using the system. Both tutors and students reported positive notions about the method. Remarkably, an average (± SD) of 520 (± 70) evaluations/month was recorded with 30 (± 5) evaluations per student/month. Conclusion The MEEP showed a significant and positive effect on both students' self-evaluations and tutors' evaluations involving an important amount of self-reflection and feedback which may complement the more traditional evaluation methods. PMID:16409640
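
    The group comparisons described in the Methods can be illustrated with a generic chi-square test of independence on a contingency table of survey responses; the counts below are invented:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented counts of post-rotation survey responses
# (rows: with / without the hands-on introductory session;
#  columns: Likert responses collapsed to disagree / neutral / agree).
table = np.array([
    [ 5, 12, 48],
    [14, 18, 36],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```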

  15. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review criteria and process for ensuring that the site contains robust and useful resources has been drafted and received initial feedback from the project advisory board, which consists of members of every segment of the target audience. The review criteria are based upon DLESE peer review criteria, the MERLOT digital library peer review criteria, digital resource evaluation criteria, and evaluation best practices. In geoscience education, as in most endeavors, improvements are made by asking questions and acting upon information about successes and failures; project evaluation can be thought of as the systematic process of asking these questions and gathering the right information. The Evaluation Toolkit seeks to help principal investigators, teachers, and evaluators use the evaluation process to improve our projects and our field.

  16. Development, evaluation, and utility of a peer evaluation form for online teaching.

    PubMed

    Gaskamp, Carol D; Kintner, Eileen

    2014-01-01

    Formative assessment of teaching by peers is an important component of quality improvement for educators. Teaching portfolios submitted for promotion and tenure are expected to include peer evaluations. Faculty resources designed for peer evaluation of classroom teaching are often inadequate for evaluating online teaching. The authors describe development, evaluation, and utility of a new peer evaluation form for formative assessment of online teaching deemed relevant, sound, feasible, and beneficial.

  17. Who Is Afraid of Evaluation? Ethics in Evaluation Research as a Way to Cope with Excessive Evaluation Anxiety: Insights from a Case Study

    ERIC Educational Resources Information Center

    Bechar, Shlomit; Mero-Jaffe, Irit

    2014-01-01

    In this paper we share our reflections, as evaluators, on an evaluation where we encountered Excessive Evaluation Anxiety (XEA). The signs of XEA which we discerned were particularly evident amongst the program head and staff who were part of a new training program. We present our insights on the evaluation process and its difficulties, as well as…

  18. Organizational Capacity to Do and Use Evaluation: Results of a Pan-Canadian Survey of Evaluators

    ERIC Educational Resources Information Center

    Cousins, J. Bradley; Elliott, Catherine; Amo, Courtney; Bourgeois, Isabelle; Chouinard, Jill; Goh, Swee C.; Lahey, Robert

    2008-01-01

    Despite increasing interest in the integration of evaluative inquiry into organizational functions and culture, the availability of empirical research addressing organizational capacity building to do and use evaluation is limited. This exploratory descriptive survey of internal evaluators in Canada asked about evaluation capacity building in the…

  19. Advancing Evaluation of Character Building Programs

    ERIC Educational Resources Information Center

    Urban, Jennifer Brown; Trochim, William M.

    2017-01-01

    This article presents how character development practitioners, researchers, and funders might think about evaluation, how evaluation fits into their work, and what needs to happen in order to sustain evaluative practices. A broader view of evaluation is presented whereby evaluation is not just seen as something that is applied at a program level,…

  20. Theory Building through Praxis Discourse: A Theory- and Practice-Informed Model of Transformative Participatory Evaluation

    ERIC Educational Resources Information Center

    Harnar, Michael A.

    2012-01-01

    Stakeholder participation in evaluation, where the evaluator engages stakeholders in the process, is prevalent in evaluation practice and is an important focus of evaluation research. Cousins and Whitmore proposed a bifurcation of participatory evaluation into the two streams of transformative participatory and practical participatory evaluation…

  1. The Software Line-up: What Reviewers Look for When Evaluating Software.

    ERIC Educational Resources Information Center

    ELECTRONIC Learning, 1982

    1982-01-01

    Contains a check list to aid teachers in evaluating software used in computer-assisted instruction on microcomputers. The evaluation form contains three sections: program description, program evaluation, and overall evaluation. A brief description of a software evaluation program in use at the Granite School District in Utah is included. (JJD)

  2. The Practice of Educational Evaluation: A View from the Inside.

    ERIC Educational Resources Information Center

    Jolly, S. Jean; Gramenz, Gary W.

    The literature on the practice of educational evaluation is reviewed, and internal and external evaluations in a school setting are compared. The paper states that the typical external evaluation is conducted as if school districts were rational organizations, a view which permeated 19th century evaluation activities. The internal evaluator, on…

  3. The Program Evaluation Standards Applied for Metaevaluation Purposes: Investigating Interrater Reliability and Implications for Use

    ERIC Educational Resources Information Center

    Wingate, Lori A.

    2009-01-01

    Metaevaluation is the evaluation of evaluation. Metaevaluation may focus on particular evaluation cases, evaluation systems, or the discipline overall. Leading scholars within the discipline consider metaevaluation to be a professional imperative, demonstrating that evaluation is a reflexive enterprise. Various criteria have been set forth for what…

  4. Evaluation Blueprint for School-Wide Positive Behavior Support

    ERIC Educational Resources Information Center

    Algozzine, Bob; Horner, Robert H.; Sugai, George; Barrett, Susan; Dickey, Celeste Rossetto; Eber, Lucille; Kincaid, Donald; Lewis, Timothy; Tobin, Tary

    2010-01-01

    Evaluation is the process of collecting and using information for decision-making. A hallmark of School-wide Positive Behavior Support (SWPBS) is a commitment to formal evaluation. The purpose of this SWPBS Evaluation Blueprint is to provide those involved in developing Evaluation Plans and Evaluation Reports with a framework for (a) addressing…

  5. Informing Evaluation Capacity Building through Profiling Organizational Capacity for Evaluation: An Empirical Examination of Four Canadian Federal Government Organizations

    ERIC Educational Resources Information Center

    Bourgeois, Isabelle; Cousins, J. Bradley

    2008-01-01

    According to the literature published on the topic, the development of an organization's capacity to do and use evaluation typically follows four stages: "traditional evaluation," characterized by externally mandated evaluation activities; "awareness and experimentation," during which organizational members learn about evaluation and its benefits…

  6. Teachers' Views of the Impact of School Evaluation and External Inspection Processes

    ERIC Educational Resources Information Center

    Hopkins, Elizabeth; Hendry, Helen; Garrod, Frank; McClare, Siobhan; Pettit, Daniel; Smith, Luke; Burrell, Hannah; Temple, Jennifer

    2016-01-01

    The research explores the views of teachers about how their teaching is evaluated by others. The tensions between evaluations motivated by the drive to improve practice (school self-evaluation) and evaluation related to external accountability (external evaluation-inspection) are considered, linked to findings and ideas reported in the literature.…

  7. Using Program Theory-Driven Evaluation Science to Crack the Da Vinci Code

    ERIC Educational Resources Information Center

    Donaldson, Stewart I.

    2005-01-01

    Program theory-driven evaluation science uses substantive knowledge, as opposed to method proclivities, to guide program evaluations. It aspires to update, clarify, simplify, and make more accessible the evolving theory of evaluation practice commonly referred to as theory-driven or theory-based evaluation. The evaluator in this chapter provides a…

  8. Evaluating Evaluation Systems: Policy Levers and Strategies for Studying Implementation of Educator Evaluation. Policy Snapshot

    ERIC Educational Resources Information Center

    Matlach, Lauren

    2015-01-01

    Evaluation studies can provide feedback on implementation, support continuous improvement, and increase understanding of evaluation systems' impact on teaching and learning. Despite the importance of educator evaluation studies, states often need support to prioritize and fund them. Successful studies require expertise, time, and a shared…

  9. Attitude towards Continuous and Comprehensive Evaluation of High School Students

    ERIC Educational Resources Information Center

    Cyril, A. Vences; Jeyasekaran, D.

    2016-01-01

    Continuous and Comprehensive Evaluation (CCE) refers to a system of school-based evaluation introduced by CBSE in all CBSE affiliated schools across the country to evaluate both scholastic and non-scholastic aspects of students' growth and development. Continuous and comprehensive evaluation is to evaluate every aspect of the child during their…

  10. A Wireless Sensor Network-Based Portable Vehicle Detector Evaluation System

    PubMed Central

    Yoo, Seong-eun

    2013-01-01

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintaining their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages: it is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of the Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. Extensive evaluations of our Vehicle Detector Evaluation System show that it can measure traffic information such as volume counts and speed with over 98% accuracy. PMID:23344388

  11. A wireless sensor network-based portable vehicle detector evaluation system.

    PubMed

    Yoo, Seong-eun

    2013-01-17

    In an upcoming smart transportation environment, performance evaluations of existing Vehicle Detection Systems are crucial to maintaining their accuracy. The existing evaluation method for Vehicle Detection Systems is based on a wired Vehicle Detection System reference and a video recorder, which must be operated and analyzed by capable traffic experts. However, this conventional evaluation system has many disadvantages: it is inconvenient to deploy, the evaluation takes a long time, and it lacks scalability and objectivity. To improve the evaluation procedure, this paper proposes a Portable Vehicle Detector Evaluation System based on wireless sensor networks. We describe both the architecture and design of the Vehicle Detector Evaluation System and the implementation results, focusing on the wireless sensor networks and methods for traffic information measurement. With the help of wireless sensor networks and automated analysis, our Vehicle Detector Evaluation System can evaluate a Vehicle Detection System conveniently and objectively. Extensive evaluations of our Vehicle Detector Evaluation System show that it can measure traffic information such as volume counts and speed with over 98% accuracy.
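
    As a hedged illustration of the kind of accuracy comparison the evaluation above reports, the sketch below compares per-interval volume counts from a reference against counts from a detector under test; the counts and the exact accuracy formula are assumptions for illustration, since the paper's computation is not reproduced here.

    # Minimal volume-count accuracy sketch (hypothetical counts).
    reference = [42, 38, 51, 47]  # ground-truth counts per interval
    detector  = [41, 38, 50, 47]  # counts from the system under test

    errors = [abs(r - d) for r, d in zip(reference, detector)]
    accuracy = 1 - sum(errors) / sum(reference)
    print(f"volume-count accuracy: {accuracy:.1%}")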

  12. Research on the teaching evaluation reform of agricultural eco-environmental protection specialties under the background of deep integration of production and education

    NASA Astrophysics Data System (ADS)

    Ma, Guosheng

    2018-02-01

    With the implementation of a personnel training model based on the deep integration of production and education, the original evaluation methods can no longer serve the goals of personnel training, and traditional teaching evaluation methods urgently need reform. This paper analyzes four main problems in the teaching evaluation of agricultural eco-environmental protection specialties and puts forward three reform measures: establishing diversified evaluation indexes, diversified evaluation subjects, and diversified evaluation feedback mechanisms.

  13. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    NASA Astrophysics Data System (ADS)

    Jiang, Yingni

    2018-03-01

    Because communication infrastructure consumes large amounts of energy, energy saving in data centers must be enforced, but the lack of evaluation mechanisms has held back energy-saving construction in data centers. In this paper, an energy saving evaluation index system for data centers was constructed on the basis of clarifying the influencing factors. Based on the evaluation index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.
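
    As a rough, hedged illustration of the weighting step this abstract describes, the Python sketch below derives index weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the 3x3 matrix, its values, and the index groups named in the comments are illustrative assumptions, not data from the paper.

    # Minimal AHP weighting sketch (assumed values, not the paper's code).
    import numpy as np

    def ahp_weights(pairwise):
        """Return priority weights and the consistency ratio (CR) for a
        reciprocal pairwise comparison matrix."""
        n = pairwise.shape[0]
        eigvals, eigvecs = np.linalg.eig(pairwise)
        k = np.argmax(eigvals.real)               # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                  # normalize to sum to 1
        ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
        ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
        return weights, (ci / ri if ri else 0.0)

    # Hypothetical comparisons among three index groups (e.g., cooling,
    # IT load, power distribution); the values are placeholders.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    w, cr = ahp_weights(A)
    print(w, cr)  # CR < 0.1 is conventionally taken as acceptable consistency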

  14. Empowerment evaluation: building communities of practice and a culture of learning.

    PubMed

    Fetterman, David M

    2002-02-01

    Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. Program participants--including clients--conduct their own evaluations; an outside evaluator often serves as a coach or additional facilitator, depending on internal program capabilities. Empowerment evaluation has three steps: 1) establishing a mission; 2) taking stock; and 3) planning for the future. These three steps build capacity. They also build a sense of community, often referred to as communities of practice. Empowerment evaluation also helps to create a culture of learning and evaluation within an organization or community.

  15. How to Modify (Implicit) Evaluations of Fear-Related Stimuli: Effects of Feature-Specific Attention Allocation

    PubMed Central

    Vanaelst, Jolien; Spruyt, Adriaan; De Houwer, Jan

    2016-01-01

    We demonstrate that feature-specific attention allocation influences the way in which repeated exposure modulates implicit and explicit evaluations toward fear-related stimuli. During an exposure procedure, participants were encouraged to assign selective attention either to the evaluative meaning (i.e., Evaluative Condition) or a non-evaluative, semantic feature (i.e., Semantic Condition) of fear-related stimuli. The influence of the exposure procedure was captured by means of a measure of implicit evaluation, explicit evaluative ratings, and a measure of automatic approach/avoidance tendencies. As predicted, the implicit measure of evaluation revealed a reduced expression of evaluations in the Semantic Condition as compared to the Evaluative Condition. Moreover, this effect generalized toward novel objects that were never presented during the exposure procedure. The explicit measure of evaluation mimicked this effect, although it failed to reach conventional levels of statistical significance. No effects were found in terms of automatic approach/avoidance tendencies. Potential implications for the treatment of anxiety disorders are discussed. PMID:27242626

  16. Electrophysiological responses to evaluative priming: the LPP is sensitive to incongruity.

    PubMed

    Herring, David R; Taylor, Jennifer H; White, Katherine R; Crites, Stephen L

    2011-08-01

    Previous studies examining event-related potentials and evaluative priming have been mixed; some find evidence that evaluative priming influences the N400, whereas others find evidence that it affects the late positive potential (LPP). Three experiments were conducted using either affective pictures (Experiments 1 and 2) or words (Experiment 3) in a sequential evaluative priming paradigm. In line with previous behavioral findings, participants responded slower to targets that were evaluatively incongruent with the preceding prime (e.g., negative preceded by positive) compared to evaluatively congruent targets (e.g., negative preceded by negative). In all three studies, the LPP was larger to evaluatively incongruent targets compared to evaluatively congruent ones, and there was no evidence that evaluative incongruity influenced the N400 component. Thus, the present results provide additional support for the notion that evaluative priming influences the LPP and not the N400. We discuss possible reasons for the inconsistent findings in prior research and the theoretical implications of the findings for both evaluative and semantic priming. 2011 APA, all rights reserved
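
    As a simple worked example of the behavioral effect both paradigms quantify, the sketch below computes the evaluative priming (congruency) effect as the difference in mean response time between incongruent and congruent trials; the trial data are invented for illustration.

    # Minimal congruency-effect computation on hypothetical RTs (ms).
    import numpy as np

    rts = {"congruent":   np.array([512, 498, 530, 505]),
           "incongruent": np.array([548, 561, 539, 570])}

    # A positive effect means slower responses on incongruent trials.
    effect = rts["incongruent"].mean() - rts["congruent"].mean()
    print(f"evaluative priming effect: {effect:.1f} ms")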

  17. Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.

    PubMed

    Brand, Ralf; Antoniewicz, Franziska

    2016-12-01

    Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise, shared this information with them, asked them to reflect on it, and had them rate any discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that the mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.

  18. Collaborative evaluation of a high school prevention curriculum: How methods of collaborative evaluation enhanced a randomized control trial to inform program improvement.

    PubMed

    Orsini, Muhsin Michael; Wyrick, David L; Milroy, Jeffrey J

    2012-11-01

    Blending high-quality and rigorous research with pure evaluation practice can often be best accomplished through thoughtful collaboration. The evaluation of a high school drug prevention program (All Stars Senior) is an example of how perceived competing purposes and methodologies can coexist to investigate formative and summative outcome variables that can be used for program improvement. Throughout this project there were many examples of client learning from evaluator and evaluator learning from client. This article presents convincing evidence that collaborative evaluation can improve the design, implementation, and findings of the randomized control trial. Throughout this paper, we discuss many examples of good science, good evaluation, and other practical benefits of practicing collaborative evaluation. Ultimately, the authors created the term pre-formative evaluation to describe the period prior to data collection and before program implementation, when collaborative evaluation can inform program improvement. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Research on efficiency evaluation model of integrated energy system based on hybrid multi-attribute decision-making.

    PubMed

    Li, Yan

    2017-05-25

    The efficiency evaluation of an integrated energy system involves many influencing factors, and the attribute values are heterogeneous and non-deterministic: they usually cannot be given as specific numbers or accurate probability distributions, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed that takes the decision maker's risk preference into account. In evaluating the efficiency of an integrated energy system, some evaluation indexes take linguistic values, or the evaluation experts disagree; this produces ambiguity in the decision information, usually in the form of uncertain linguistic values and numerical interval values. Interval-valued and fuzzy linguistic multiple-attribute decision-making methods are therefore proposed, and finally a mathematical model of the efficiency evaluation of an integrated energy system is constructed.
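
    As a hedged illustration of one ingredient of such a model, the sketch below aggregates interval-valued attribute scores with a weighted sum and compares two alternatives by a standard possibility-degree formula; the alternatives, weights, and intervals are invented for illustration and do not come from the paper.

    # Minimal interval-valued aggregation sketch (assumed data).
    import numpy as np

    # Rows are alternatives; each attribute value is an interval [low, high].
    lows  = np.array([[0.6, 0.4, 0.7],
                      [0.5, 0.6, 0.6]])
    highs = np.array([[0.8, 0.6, 0.9],
                      [0.7, 0.8, 0.7]])
    w = np.array([0.5, 0.3, 0.2])            # attribute weights, sum to 1

    agg_low, agg_high = lows @ w, highs @ w  # interval weighted sums

    def possibility(a, b):
        """Degree to which interval a = [a-, a+] dominates b = [b-, b+]."""
        (la, ua), (lb, ub) = a, b
        return min(max((ua - lb) / ((ua - la) + (ub - lb)), 0.0), 1.0)

    p = possibility((agg_low[0], agg_high[0]), (agg_low[1], agg_high[1]))
    print(p)  # p > 0.5 suggests alternative 1 is preferred to alternative 2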

  20. [Process and key points of clinical literature evaluation of post-marketing traditional Chinese medicine].

    PubMed

    Liu, Huan; Xie, Yanming

    2011-10-01

    The clinical literature evaluation of post-marketing traditional Chinese medicine is a comprehensive, literature-evidence-based evaluation that collects and analyzes the literature on a drug's efficacy, safety, and economy; it is part of evidence-based medicine evaluation. Literature evaluation holds a foundational and key position in the post-marketing clinical evaluation of Chinese medicine. Through literature evaluation, one can fully grasp the available information, determine directions for the secondary development of marketed traditional Chinese medicine varieties, further clarify clinical indications, and improve the medicines. This paper discusses the main steps and emphases of clinical literature evaluation. In safety literature evaluation, importance should be attached to the comprehensive collection of drug safety information. Efficacy evaluation should note traditional Chinese medicine's special advantages in improving syndromes and improving patients' quality of life. Economic literature evaluation should pay attention to the reliability, sensitivity, and practicability of its conclusions.

  1. How to Modify (Implicit) Evaluations of Fear-Related Stimuli: Effects of Feature-Specific Attention Allocation.

    PubMed

    Vanaelst, Jolien; Spruyt, Adriaan; De Houwer, Jan

    2016-01-01

    We demonstrate that feature-specific attention allocation influences the way in which repeated exposure modulates implicit and explicit evaluations toward fear-related stimuli. During an exposure procedure, participants were encouraged to assign selective attention either to the evaluative meaning (i.e., Evaluative Condition) or a non-evaluative, semantic feature (i.e., Semantic Condition) of fear-related stimuli. The influence of the exposure procedure was captured by means of a measure of implicit evaluation, explicit evaluative ratings, and a measure of automatic approach/avoidance tendencies. As predicted, the implicit measure of evaluation revealed a reduced expression of evaluations in the Semantic Condition as compared to the Evaluative Condition. Moreover, this effect generalized toward novel objects that were never presented during the exposure procedure. The explicit measure of evaluation mimicked this effect, although it failed to reach conventional levels of statistical significance. No effects were found in terms of automatic approach/avoidance tendencies. Potential implications for the treatment of anxiety disorders are discussed.

  2. The opportunities and challenges of multi-site evaluations: lessons from the jail diversion and trauma recovery national cross-site evaluation.

    PubMed

    Stainbrook, Kristin; Penney, Darby; Elwyn, Laura

    2015-06-01

    Multi-site evaluations, particularly of federally funded service programs, pose a special set of challenges for program evaluation. Not only are there contextual differences related to project location, there are often relatively few programmatic requirements, which results in variations in program models, target populations and services. The Jail Diversion and Trauma Recovery-Priority to Veterans (JDTR) National Cross-Site Evaluation was tasked with conducting a multi-site evaluation of thirteen grantee programs that varied along multiple domains. This article describes the use of a mixed methods evaluation design to understand the jail diversion programs and client outcomes for veterans with trauma, mental health and/or substance use problems. We discuss the challenges encountered in evaluating diverse programs, the benefits of the evaluation in the face of these challenges, and offer lessons learned for other evaluators undertaking this type of evaluation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. But do you think I’m cool? Developmental differences in striatal recruitment during direct and reflected social self-evaluations

    PubMed Central

    Jankowski, Kathryn F.; Moore, William E.; Merchant, Junaid S.; Kahn, Lauren E.; Pfeifer, Jennifer H.

    2015-01-01

    The current fMRI study investigated the neural foundations of evaluating oneself and others during early adolescence and young adulthood. Eighteen early adolescents (ages 11–14, M = 12.6) and 19 young adults (ages 22–31, M = 25.6) evaluated if academic, physical, and social traits described themselves directly (direct self-evaluations), described their best friend directly (direct other-evaluations), described themselves from their best friend’s perspective (reflected self-evaluations), or in general could change over time (control malleability-evaluations). Compared to control evaluations, both adolescents and adults recruited cortical midline structures during direct and reflected self-evaluations, as well as during direct other-evaluations, converging with previous research. However, unique to this study was a significant three-way interaction between age group, evaluative perspective, and domain within bilateral ventral striatum. Region of interest analyses demonstrated a significant evaluative perspective by domain interaction within the adolescent sample only. Adolescents recruited greatest bilateral ventral striatum during reflected social self-evaluations, which was positively correlated with age and pubertal development. These findings suggest that reflected social self-evaluations, made from the inferred perspective of a close peer, may be especially self-relevant, salient, or rewarding to adolescent self-processing – particularly during the progression through adolescence – and this feature persists into adulthood. PMID:24582805

  4. Feasibility and reliability of remote assessment of PALS psychomotor skills via interactive videoconferencing.

    PubMed

    Weeks, Douglas L; Molsberry, Dianne M

    2009-03-01

    This study determined inter-rater agreement between skill assessments provided by on-site PALS evaluators with ratings from evaluators at a remote site viewing the same skill performance over a videoconferencing network. Judgments about feasibility of remote evaluation were also obtained from the evaluators and PALS course participants. Two remote and two on-site instructors independently rated performance of 27 course participants who performed cardiac and shock/respiratory emergency core cases. Inter-rater reliability was assessed with the intraclass correlation coefficient (ICC). Feasibility was assessed with surveys of evaluators and course participants. Core cases were under the direction of the remote evaluators. The ICC for overall agreement on pass/fail decisions was 0.997 for the cardiac cases and 0.998 for the shock/respiratory cases. Perfect agreement was reached on 52 of 54 pass/fail decisions. Across all evaluators, all core cases, and all participants, 2584 ratings of individual skill criteria were provided, of which 21 (0.8%) were ratings in which a single evaluator disagreed with the other three evaluators. No trends emerged for location of the disagreeing evaluator. Survey responses indicated that remote evaluation was acceptable and feasible to course participants and to the evaluators. Videoconferencing technology was shown to provide adequate spatial and temporal resolution for PALS evaluators at-a-distance from course participants to agree with ratings of on-site evaluators.
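
    The report cites intraclass correlation coefficients but the record does not specify which ICC form was used; as a hedged illustration, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater), a common choice for this design, on invented ratings from four raters.

    # Minimal ICC(2,1) sketch on hypothetical ratings (not study data).
    import numpy as np

    def icc_2_1(ratings):
        """ratings: n_subjects x k_raters matrix of scores."""
        n, k = ratings.shape
        grand = ratings.mean()
        ssr = k * np.sum((ratings.mean(axis=1) - grand) ** 2)  # subjects
        ssc = n * np.sum((ratings.mean(axis=0) - grand) ** 2)  # raters
        sse = np.sum((ratings - grand) ** 2) - ssr - ssc       # residual
        msr, msc = ssr / (n - 1), ssc / (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # E.g., two on-site and two remote raters scoring four participants
    scores = np.array([[4, 4, 4, 4],
                       [3, 3, 4, 3],
                       [5, 5, 5, 5],
                       [2, 3, 2, 2]])
    print(round(icc_2_1(scores), 3))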

  5. But do you think I'm cool? Developmental differences in striatal recruitment during direct and reflected social self-evaluations.

    PubMed

    Jankowski, Kathryn F; Moore, William E; Merchant, Junaid S; Kahn, Lauren E; Pfeifer, Jennifer H

    2014-04-01

    The current fMRI study investigates the neural foundations of evaluating oneself and others during early adolescence and young adulthood. Eighteen early adolescents (ages 11-14, M=12.6) and 19 young adults (ages 22-31, M=25.6) evaluated whether academic, physical, and social traits described themselves directly (direct self-evaluations), described their best friend directly (direct other-evaluations), described themselves from their best friend's perspective (reflected self-evaluations), or in general could change over time (control malleability-evaluations). Compared to control evaluations, both adolescents and adults recruited cortical midline structures during direct and reflected self-evaluations, as well as during direct other-evaluations, converging with previous research. However, unique to this study was a significant three-way interaction between age group, evaluative perspective, and domain within bilateral ventral striatum. Region of interest analyses demonstrated a significant evaluative perspective by domain interaction within the adolescent sample only. Adolescents recruited greatest bilateral ventral striatum during reflected social self-evaluations, which was positively correlated with age and pubertal development. These findings suggest that reflected social self-evaluations, made from the inferred perspective of a close peer, may be especially self-relevant, salient, or rewarding to adolescent self-processing--particularly during the progression through adolescence--and this feature persists into adulthood. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  7. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  8. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  9. A Basis for Determining the Adequacy of Evaluation Designs

    ERIC Educational Resources Information Center

    Sanders, James R.; Nafziger, Dean N.

    2011-01-01

    The purpose of this paper is to provide a basis for judging the adequacy of evaluation plans or, as they are commonly called, evaluation designs. The authors assume that using the procedures suggested in this paper to determine the adequacy of evaluation designs in advance of actually conducting evaluations will lead to better evaluation designs,…

  10. Evaluation Use: Results from a Survey of U.S. American Evaluation Association Members

    ERIC Educational Resources Information Center

    Fleischer, Dreolin N.; Christie, Christina A.

    2009-01-01

    This paper presents the results of a cross-sectional survey on evaluation use completed by 1,140 U.S. American Evaluation Association members. This study had three foci: evaluators' current attitudes, perceptions, and experiences related to evaluation use theory and practice, how these data are similar to those reported in a previous study…

  11. An Evaluation of Output Quality of Machine Translation (Padideh Software vs. Google Translate)

    ERIC Educational Resources Information Center

    Azer, Haniyeh Sadeghi; Aghayi, Mohammad Bagher

    2015-01-01

    This study aims to evaluate the translation quality of two machine translation systems in translating six different text-types, from English to Persian. The evaluation was based on criteria proposed by Van Slype (1979). The proposed model for evaluation is a black-box type, comparative and adequacy-oriented evaluation. To conduct the evaluation, a…

  12. When Unintended Consequences Become the Main Effect: Evaluating the Development of a Foster Parent Training Program.

    ERIC Educational Resources Information Center

    Loesch-Griffin, Deborah A.; Ringstaff, Cathy

    A program of education, training, and support provided to foster parents in a California county through a nonprofit agency is evaluated. The evaluators' experience indicates that: (1) evaluations are gaining in popularity; (2) role shifts by evaluators are sometimes difficult to perceive; (3) program staff are unlikely to use evaluative feedback…

  13. Tensions and Trade-Offs in Voluntary Involvement: Evaluating the Collaboratives for Excellence in Teacher Preparation

    ERIC Educational Resources Information Center

    Greenseid, Lija O.; Lawrenz, Frances

    2011-01-01

    A team at the University of Minnesota conducted the Collaboratives for Excellence in Teacher Preparation (CETP) core evaluation between 1999 and 2004. The purpose of the CETP core evaluation was to achieve consensus among CETP project leaders and project evaluators on evaluation questions; to develop, pilot, and field test evaluation instruments…

  14. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  15. Assessing Vital Signs: Applying Two Participatory Evaluation Frameworks to the Evaluation of a College of Nursing

    ERIC Educational Resources Information Center

    Connors, Susan C.; Magilvy, Joan K.

    2011-01-01

    Evaluation research has been in progress to clarify the concept of participatory evaluation and to assess its impact. Recently, two theoretical frameworks have been offered--Daigneault and Jacob's participatory evaluation measurement index and Champagne and Smits' model of practical participatory evaluation. In this case report, we apply these…

  16. Linking Project Evaluation and Goals-Based Teacher Evaluation: Evaluating the Accelerated Schools Project in South Carolina.

    ERIC Educational Resources Information Center

    Finnan, Christine; Davis, Sara Calhoun

    This paper describes efforts to design an evaluation system that has as its primary objective helping schools effect positive change through the Accelerated Schools Project. Three characteristics were deemed essential: (1) that the evaluation be useful and meaningful; (2) that it be sensitive to local conditions; and (3) that evaluations of…

  17. A Strategy for Detection of Inconsistency in Evaluation of Essay Type Answers

    ERIC Educational Resources Information Center

    Shukla, Archana; Chaudhary, Banshi D.

    2014-01-01

    The quality of evaluation of essay type answer books involving multiple evaluators for courses with large number of enrollments is likely to be affected due to heterogeneity in experience, expertise and maturity of evaluators. In this paper, we present a strategy to detect anomalies in evaluation of essay type answers by multiple evaluators based…

  18. Program Evaluation of a Special Education Day School for Conduct Problem Adolescents.

    ERIC Educational Resources Information Center

    Maher, Charles A.

    1981-01-01

    Describes a procedure for program evaluation of a special education day school. The procedure enables a program evaluator to: (1) identify priority evaluation information needs of a school staff, (2) involve those persons in evaluation design and implementation, and (3) determine the utility of the evaluation for program decision-making purposes.…

  19. Using program evaluation to support knowledge translation in an interprofessional primary care team: a case study.

    PubMed

    Donnelly, Catherine; Shulha, Lyn; Klinger, Don; Letts, Lori

    2016-10-06

    Evaluation is a fundamental component in building quality primary care and is ideally situated to support individual, team, and organizational learning by offering an accessible form of participatory inquiry. The evaluation literature has begun to recognize the unique features of knowledge translation (KT) evaluations and has described attributes to consider when evaluating KT activities. While both disciplines have focused on the evaluation of KT activities, neither has explored the role of evaluation in KT. The purpose of the paper is to examine how participation in program evaluation can support KT in a primary care setting. A mixed methods case study design was used, in which evaluation was conceptualized as a change process and intervention. The study was conducted at a Memory Clinic within an interprofessional primary care clinic. An evaluation framework, Pathways of Influence, provided the theoretical foundation for understanding how program evaluation can facilitate the translation of knowledge at the level of the individual, the inter-personal (the Memory Clinic team), and the organization. Data collection included questionnaires, interviews, an evaluation log, and document analysis. Questionnaires and interviews were administered both before and after the evaluation, and pattern matching was used to analyze the data against predetermined propositions. Individuals gained program knowledge that resulted in changes to both individual and program practices. A key theme was the importance clinicians placed on local, program-based knowledge. The evaluation had less influence on the broader health organization. Program evaluation facilitated individual, team, and organizational learning. The use of evaluation to support KT is ideally suited to a primary care setting, offering relevant and applicable knowledge to primary care team members while being sensitive to local context.

  20. Evaluating a federated medical search engine: tailoring the methodology and reporting the evaluation outcomes.

    PubMed

    Saparova, D; Belden, J; Williams, J; Richardson, B; Schuster, K

    2014-01-01

    Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines, and even fewer base their evaluations on existing evaluation frameworks. This study evaluated a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting, applying the Human, Organization, and Technology fit (HOT-fit) evaluation framework. The hierarchical structure of the HOT factors allowed for the identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through measurements of time-on-task and the accuracy of found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of search results. These mixed results led to the conclusion that, in the case of MedSocket, technology fit had a significant influence on human and organization fit. Hence, improving the technological capabilities of the system is critical before its net benefits can become noticeable. The HOT-fit evaluation framework was instrumental in tailoring the methodology for a comprehensive evaluation of the search engine. This multidimensional evaluation resulted in recommendations for system improvement.

  1. Lessons learned about collaborative evaluation using the Capacity for Applying Project Evaluation (CAPE) framework with school and district leaders.

    PubMed

    Corn, Jenifer O; Byrom, Elizabeth; Knestis, Kirk; Matzen, Nita; Thrift, Beth

    2012-11-01

    Schools, districts, and state-level educational organizations are experiencing a great shift in the way they do the business of education. This shift focuses on accountability, specifically through the expectation of effective utilization of evaluation-focused efforts to guide and support decisions about educational program implementation. As a result, education leaders need specific guidance and training on how to plan, implement, and use evaluation to critically examine district and school-level initiatives. One specific effort intended to address this need is the Capacity for Applying Project Evaluation (CAPE) framework. The CAPE framework is composed of three crucial components: a collection of evaluation resources; a professional development model; and a conceptual framework that guides the work to support evaluation planning and implementation in schools and districts. School and district teams serve as active participants in the professional development and ultimately as formative evaluators of their own school- or district-level programs by working collaboratively with evaluation experts. The CAPE framework involves school and district staff in planning and implementing their evaluation. They are the ones deciding what evaluation questions to ask, which instruments to use, what data to collect, and how and to whom results should be reported. Initially this work is done through careful scaffolding by evaluation experts, with supports slowly pulled away as the educators gain experience and confidence in their knowledge and skills as evaluators. Since CAPE engages all stakeholders in all stages of the evaluation, the philosophical intentions of these efforts to build capacity for formative evaluation align strictly with the collaborative evaluation approach. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Framework and criteria for program evaluation in the Office of Conservation and Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This study addresses the development of a framework and generic criteria for conducting program evaluation in the Office of Conservation and Renewable Energy. The evaluation process is intended to provide the Assistant Secretary with comprehensive and consistent evaluation data for management decisions regarding policy and strategy, crosscutting energy impacts, and resource allocation and justification. The study defines evaluation objectives, identifies basic information requirements (criteria), and identifies a process for collecting evaluation results at the basic program level, integrating the results, and summarizing information upward through the CE organization to the Assistant Secretary. Methods are described by which initial criteria were tested, analyzed, and refined for CE program applicability. General guidelines pertaining to evaluation and the Sunset Review requirements are examined and various types, designs, and models for evaluation are identified. Existing CE evaluation reports are reviewed and comments on their adequacy for meeting current needs are provided. An inventory and status survey of CE program evaluation activities is presented, as are issues, findings, and recommendations pertaining to CE evaluation and Sunset Review requirements. Also, sources of data for use in evaluation and the Sunset Review response are identified. An inventory of CE evaluation-related documents and reports is provided.

  3. Student perceptions of evaluation in undergraduate medical education: A qualitative study from one medical school.

    PubMed

    Schiekirka, Sarah; Reinhardt, Deborah; Heim, Susanne; Fabry, Götz; Pukrop, Tobias; Anders, Sven; Raupach, Tobias

    2012-06-22

    Evaluation is an integral part of medical education. Despite a wide use of various evaluation tools, little is known about student perceptions regarding the purpose and desired consequences of evaluation. Such knowledge is important to facilitate interpretation of evaluation results. The aims of this study were to elicit student views on the purpose of evaluation, indicators of teaching quality, evaluation tools and possible consequences drawn from evaluation data. This qualitative study involved 17 undergraduate medical students in Years 3 and 4 participating in 3 focus group interviews. Content analysis was conducted by two different researchers. Evaluation was viewed as a means to facilitate improvements within medical education. Teaching quality was believed to be dependent on content, process, teacher and student characteristics as well as learning outcome, with an emphasis on the latter. Students preferred online evaluations over paper-and-pencil forms and suggested circulating results among all faculty and students. Students strongly favoured the allocation of rewards and incentives for good teaching to individual teachers. In addition to assessing structural aspects of teaching, evaluation tools need to adequately address learning outcome. The use of reliable and valid evaluation methods is a prerequisite for resource allocation to individual teachers based on evaluation results.

  4. Self-Construal Priming Modulates Self-Evaluation under Social Threat

    PubMed Central

    Zhang, Tianyang; Xi, Sisi; Jin, Yan; Wu, Yanhong

    2017-01-01

    Previous studies have shown that Westerners evaluate themselves in an especially flattering way when faced with a social-evaluative threat. The current study first investigated whether East Asians show a similar pattern, recruiting Chinese participants and using social-evaluative threat manipulations in which participants performed self-evaluation tasks while receiving different social-evaluative feedback (Experiment 1). It then examined whether the different response patterns can be modulated by different types of self-construal, using social-evaluative threat manipulations in conjunction with a self-construal priming task (Experiment 2). The results showed that, in contrast to the Western pattern, Chinese participants rated themselves as having a significantly greater above-average effect only when faced with nonthreatening feedback, not with a social-evaluative threat. More importantly, we found that self-construal modulated self-evaluation under social-evaluative threat: following independent self-construal priming, participants tended to show a greater above-average effect when faced with a social-evaluative threat. This pattern disappeared after participants received interdependent self-construal priming or neutral priming. These findings suggest that the effects of social-evaluative threat on self-evaluation are not culturally universal and are strongly modulated by self-construal priming. PMID:29081755

  5. Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis.

    PubMed

    Chen, Huey T

    2016-12-01

    Theories of program and theories of evaluation form the foundation of program evaluation theories. Theories of program reflect assumptions on how to conceptualize an intervention program for evaluation purposes, while theories of evaluation reflect assumptions on how to design a useful evaluation. These two types of theories are related but often discussed separately. This paper uses three theoretical perspectives (reductionism, systems thinking, and pragmatic synthesis) to interface them and discusses the implications for evaluation practice. Reductionism proposes that an intervention program can be broken into crucial components for rigorous analysis; systems thinking views an intervention program as dynamic and complex, requiring a holistic examination. In spite of their contributions, reductionism and systems thinking represent the extreme ends of a theoretical spectrum; many real-world programs, however, fall in the middle. Pragmatic synthesis is being developed to serve these moderate-complexity programs. These three theoretical perspectives have their own strengths and challenges. Knowledge of these three perspectives and their evaluation implications can better guide the design of fruitful evaluations, improve the quality of evaluation practice, inform potential areas for developing cutting-edge evaluation approaches, and contribute to advancing program evaluation toward a mature applied science. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Toward Best Practice in Evaluation: A Study of Australian Health Promotion Agencies.

    PubMed

    Francis, Louise J; Smith, Ben J

    2015-09-01

    Evaluation makes a critical contribution to the evidence base for health promotion programs and policy. Because there has been limited research about the characteristics and determinants of evaluation practice in this field, this study audited evaluations completed by health promotion agencies in Victoria, Australia, and explored the factors that enabled or hindered evaluation performance. Twenty-four agencies participated. A systematic assessment of 29 recent evaluation reports was undertaken, and in-depth interviews were carried out with 18 experienced practitioners. There was wide variability in the scope of evaluations and the level of reporting undertaken. Formative evaluation was uncommon, but almost all included process evaluation, especially of strategy reach and delivery. Impact evaluation was attempted in the majority of cases, but the designs and measures used were often not specified. Practitioners strongly endorsed the importance of evaluation, but the reporting requirements and inconsistent administrative procedures of the funding body were cited as significant barriers. Budget constraints, employment of untrained coworkers, and lack of access to measurement tools were other major barriers to evaluation. Capacity building to strengthen evaluation needs to encompass system, organizational, and practitioner-level action. This includes strengthening funding and reporting arrangements, fostering partnerships, and tailoring workforce development opportunities for practitioners. © 2015 Society for Public Health Education.

  7. Student perceptions of evaluation in undergraduate medical education: A qualitative study from one medical school

    PubMed Central

    2012-01-01

    Background: Evaluation is an integral part of medical education. Despite a wide use of various evaluation tools, little is known about student perceptions regarding the purpose and desired consequences of evaluation. Such knowledge is important to facilitate interpretation of evaluation results. The aims of this study were to elicit student views on the purpose of evaluation, indicators of teaching quality, evaluation tools and possible consequences drawn from evaluation data. Methods: This qualitative study involved 17 undergraduate medical students in Years 3 and 4 participating in 3 focus group interviews. Content analysis was conducted by two different researchers. Results: Evaluation was viewed as a means to facilitate improvements within medical education. Teaching quality was believed to be dependent on content, process, teacher and student characteristics as well as learning outcome, with an emphasis on the latter. Students preferred online evaluations over paper-and-pencil forms and suggested circulating results among all faculty and students. Students strongly favoured the allocation of rewards and incentives for good teaching to individual teachers. Conclusions: In addition to assessing structural aspects of teaching, evaluation tools need to adequately address learning outcome. The use of reliable and valid evaluation methods is a prerequisite for resource allocation to individual teachers based on evaluation results. PMID:22726271

  8. Relational responsibilities in responsive evaluation.

    PubMed

    Visse, Merel; Abma, Tineke A; Widdershoven, Guy A M

    2012-02-01

    This article explores how we can enhance our understanding of the moral responsibilities in daily, plural practices of responsive evaluation. It introduces an interpretive framework for understanding the moral aspects of evaluation practice. The framework supports responsive evaluators in better understanding and handling their moral responsibilities. A case is introduced to illustrate our argument. Responsive evaluation contributes to the design and implementation of policy by working with stakeholders and coordinating the evaluation process as a relationally responsible practice. Responsive evaluation entails a democratic process in which the evaluator fosters and enters a partnership with stakeholders. The responsibilities of an evaluator generally involve issues such as 'confidentiality', 'accountability' and 'privacy'. The responsive evaluator has specific responsibilities, for example to include stakeholders and vulnerable groups and to foster an ongoing dialogue. In addition, responsive evaluation involves a relational responsibility, which becomes present in daily situations in which stakeholders express expectations and voice demands. In our everyday work as evaluators, it is difficult to respond to all these demands at the same time. In addition, this article demonstrates that novice evaluators experience challenges concerning over- and under-identification with stakeholders. Guidelines and quality criteria on how to act are helpful, but need interpretation and application to the unique situation at hand. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Reinventing Evaluation

    ERIC Educational Resources Information Center

    Hopson, Rodney K.

    2005-01-01

    This commentary reviews "Negotiating Researcher Roles in Ethnographic Program Evaluation" and discusses the changing field of evaluation. It situates postmodern deliberations in evaluation anthropology and ethnoevaluation, two concepts that explore the interdisciplinary merger in evaluation, ethnography, and anthropology. Reflecting on Hymes's…

  10. A new state evaluation method of oil pump unit based on AHP and FCE

    NASA Astrophysics Data System (ADS)

    Lin, Yang; Liang, Wei; Qiu, Zeyang; Zhang, Meng; Lu, Wenqing

    2017-05-01

    In order to make an accurate state evaluation of an oil pump unit, a comprehensive evaluation index should be established. A multi-parameter state evaluation method for oil pump units is proposed in this paper. The oil pump unit is analyzed by Failure Mode and Effect Analysis (FMEA), and an evaluation index is obtained from the FMEA conclusions. The weights of the different parameters in the evaluation index are determined using the Analytic Hierarchy Process (AHP) with expert experience. According to the evaluation index and the weight of each parameter, the state evaluation is carried out by Fuzzy Comprehensive Evaluation (FCE), and the state is divided into five levels depending on the status value, an approach inspired by human health assessment. To verify the effectiveness and feasibility of the proposed method, a state evaluation of an oil pump used in a pump station is taken as an example.
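
    As a hedged sketch of the FCE step this abstract outlines, the Python fragment below combines AHP-style parameter weights with a fuzzy membership matrix and picks the state level with maximum membership; the five levels, the weights, and the membership values are illustrative assumptions, not the paper's data.

    # Minimal fuzzy comprehensive evaluation sketch (assumed values).
    import numpy as np

    levels = ["excellent", "good", "fair", "poor", "critical"]

    # Each row: one monitored parameter's membership degrees over the five
    # levels (e.g., derived from vibration, temperature, pressure readings).
    R = np.array([[0.6, 0.3, 0.1, 0.0, 0.0],
                  [0.2, 0.5, 0.2, 0.1, 0.0],
                  [0.0, 0.3, 0.4, 0.2, 0.1]])

    w = np.array([0.5, 0.3, 0.2])     # AHP-derived parameter weights

    b = w @ R                         # weighted-average fuzzy operator
    b /= b.sum()                      # normalize the membership vector
    print(levels[int(np.argmax(b))])  # maximum-membership state level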

  11. Warmth Trumps Competence in Evaluations of Both Ingroup and Outgroup.

    PubMed

    Hack, Tay; Goodwin, Stephanie A; Fiske, Susan T

    2013-09-01

    Research from a number of social psychological traditions suggests that social perceivers should be more concerned with evaluating others' intentions (i.e., warmth) relative to evaluating others' ability to act on those intentions (i.e., competence). The present research examined whether warmth evaluations have cognitive primacy over competence evaluations in a direct reaction-time comparison and whether the effect is moderated by ingroup versus outgroup membership. Participants evaluated as quickly as possible whether warmth versus competence traits described photographs of racial ingroup versus outgroup members expressing neutral emotions. Responses supported the hypothesis that evaluations of warmth take precedence over evaluations of competence; participants were faster to evaluate others on warmth-related traits compared to competence-related traits. Moreover, this primacy effect was not moderated by racial group membership. The data from this research speak to the robustness of the primacy of warmth in social evaluation.

  12. Evaluating Cross-Cutting Approaches to Chronic Disease Prevention and Management: Developing a Comprehensive Evaluation

    PubMed Central

    Jernigan, Jan; Barnes, Seraphine Pitt; Shea, Pat; Davis, Rachel; Rutledge, Stephanie

    2017-01-01

    We provide an overview of the comprehensive evaluation of State Public Health Actions to Prevent and Control Diabetes, Heart Disease, Obesity and Associated Risk Factors and Promote School Health (State Public Health Actions). State Public Health Actions is a program funded by the Centers for Disease Control and Prevention to support the statewide implementation of cross-cutting approaches to promote health and prevent and control chronic diseases. The evaluation addresses the relevance, quality, and impact of the program by using 4 components: a national evaluation, performance measures, state evaluations, and evaluation technical assistance to states. Challenges of the evaluation included assessing the extent to which the program contributed to changes in the outcomes of interest and the variability in the states’ capacity to conduct evaluations and track performance measures. Given the investment in implementing collaborative approaches at both the state and national level, achieving meaningful findings from the evaluation is critical. PMID:29215974

  13. Cybernetics: a possible solution for the "knowledge gap" between "external" and "internal" in evaluation processes.

    PubMed

    Levin-Rozalis, Miri

    2010-11-01

    This paper addresses the issue of the knowledge gap between evaluators and the entity being evaluated: the dilemma of the knowledge of professional evaluators vs. the in-depth knowledge of the evaluated subjects. In order to optimize evaluative outcomes, the author suggests an approach based on ideas borrowed from the science of cybernetics as a method of evaluation--one that enables in-depth perception of the evaluated field without jeopardizing a rigorous study or the evaluator's professionalism. The paper focuses on the main concepts that deal with this dilemma--showing how cybernetics combines the different bodies of knowledge of the different stakeholders, including the professional evaluator, resulting in a coherent body of knowledge created mainly by those internal to the process, owned by them, and relevant to all--those who are internal and those who are external and their different purposes. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  14. Visualizing context through theory deconstruction: a content analysis of three bodies of evaluation theory literature.

    PubMed

    Vo, Anne T

    2013-06-01

    While the evaluation field collectively agrees that contextual factors bear on evaluation practice and related scholarly endeavors, the discipline does not yet have an explicit framework for understanding evaluation context. To address this gap in the knowledge base, this paper explores the ways in which evaluation context has been addressed in the practical-participatory, values-engaged, and emergent realist evaluation literatures. Five primary dimensions that constitute evaluation context were identified for this purpose: (1) stakeholder; (2) program; (3) organization; (4) historical/political; and (5) evaluator. Journal articles, book chapters, and conference papers rooted in the selected evaluation approaches were compared along these dimensions in order to explore points of convergence and divergence in the theories. Study results suggest that the selected prescriptive theories most clearly explicate stakeholder and evaluator contexts. Programmatic, organizational, and historical/political contexts, on the other hand, require further clarification. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Warmth Trumps Competence in Evaluations of Both Ingroup and Outgroup

    PubMed Central

    Goodwin, Stephanie A.; Fiske, Susan T.

    2015-01-01

    Research from a number of social psychological traditions suggests that social perceivers should be more concerned with evaluating others’ intentions (i.e., warmth) relative to evaluating others’ ability to act on those intentions (i.e., competence). The present research examined whether warmth evaluations have cognitive primacy over competence evaluations in a direct reaction-time comparison and whether the effect is moderated by ingroup versus outgroup membership. Participants evaluated as quickly as possible whether warmth versus competence traits described photographs of racial ingroup versus outgroup members expressing neutral emotions. Responses supported the hypothesis that evaluations of warmth take precedence over evaluations of competence; participants were faster to evaluate others on warmth-related traits compared to competence-related traits. Moreover, this primacy effect was not moderated by racial group membership. The data from this research speak to the robustness of the primacy of warmth in social evaluation. PMID:26161263

  16. Training Select-in Interviewers for Astronaut Selection: A Program Evaluation

    NASA Technical Reports Server (NTRS)

    Hysong, S.; Galarza, L.; Holland, A.; Billica, Roger (Technical Monitor)

    2000-01-01

    Psychological factors critical to the success of short and long-duration missions have been identified in previous research; however, evaluation for such critical factors in astronaut applicants leaves much room for human interpretation. Thus, an evaluator training session was designed to standardize the interpretation of critical factors, as well as the structure of the select-in interview across evaluators. The purpose of this evaluative study was to determine the effectiveness of the evaluator training sessions and their potential impact on evaluator ratings.

  17. Evaluating evaluation forms form.

    PubMed

    Smith, Roger P

    2004-02-01

    To provide a tool for evaluating evaluation forms. A new form has been developed and tested on itself and a sample of evaluation forms obtained from the graduate medical education offices of several local universities. Additional forms from hospital administration were also subjected to analysis. The new form performed well when applied to itself. The form performed equally well when applied to the other (subject) forms, although their scores were embarrassingly poor. A new form for evaluating evaluation forms is needed, useful, and now available.

  18. Evaluating Training.

    ERIC Educational Resources Information Center

    Brethower, Karen S.; Rummler, Geary A.

    1979-01-01

    Presents general systems models (ballistic system, guided system, and adaptive system) and an evaluation matrix to help in examining training evaluation alternatives and in deciding what evaluation is appropriate. Includes some guidelines for conducting evaluation studies using four designs (control group, reversal, multiple baseline, and…

  19. 77 FR 35665 - Notice of Proposed Information Collection Requests; Office of Planning, Evaluation and Policy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-14

    ... evaluations may include both formative implementation and process evaluations that evaluate a program as it is unfolding, and summative descriptive evaluations that examine changes in final outcomes in a non-causal...

  20. THE ATMOSPHERIC MODEL EVALUATION TOOL

    EPA Science Inventory

    This poster describes a model evaluation tool that is currently being developed and applied for meteorological and air quality model evaluation. The poster outlines the framework and provides examples of statistical evaluations that can be performed with the model evaluation tool...
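
    As an illustration of the kind of statistics such a tool computes, the Python sketch below calculates mean bias, root-mean-square error, and Pearson correlation for paired model-observation values. The metric selection, variable names, and data are assumptions for illustration, not the tool's actual implementation.

      import math

      def evaluation_stats(model, obs):
          """Common model-evaluation statistics for paired values
          (an illustrative sketch, not the tool's own code)."""
          n = len(obs)
          bias = sum(m - o for m, o in zip(model, obs)) / n           # mean bias
          rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
          mean_m, mean_o = sum(model) / n, sum(obs) / n
          cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
          var_m = sum((m - mean_m) ** 2 for m in model)
          var_o = sum((o - mean_o) ** 2 for o in obs)
          corr = cov / math.sqrt(var_m * var_o)                       # Pearson r
          return {"bias": bias, "rmse": rmse, "corr": corr}

      # Hypothetical hourly ozone values (ppb): model output vs. monitor data.
      print(evaluation_stats([42.1, 55.3, 61.0, 48.7], [40.0, 57.2, 59.5, 50.1]))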

  1. Fuel Cell Transit Bus Coordination and Evaluation Plan California Fuel Cell Transit Evaluation Team, DRAFT

    DOT National Transportation Integrated Search

    2003-10-29

    The objective of the DOE/NREL evaluation program is to provide comprehensive, unbiased evaluation results of advanced technology vehicle development and operations, evaluation of hydrogen infrastructure development and operation, and descriptions of ...

  2. Cybernetics: A Possible Solution for the "Knowledge Gap" between "External" and "Internal" in Evaluation Processes

    ERIC Educational Resources Information Center

    Levin-Rozalis, Miri

    2010-01-01

    This paper addresses the issue of the knowledge gap between evaluators and the entity being evaluated: the dilemma of the knowledge of professional evaluators vs. the in-depth knowledge of the evaluated subjects. In order to optimize evaluative outcomes, the author suggests an approach based on ideas borrowed from the science of cybernetics as a…

  3. An Evaluation of TCITY: The Twin City Institute for Talented Youth. Report #1 in Evaluation Report Series.

    ERIC Educational Resources Information Center

    Stake, Robert E.; Gjerde, Craig

    This evaluation of the Twin City Institute for Talented Youth, a summer program for gifted students in grades 9 through 12, consists of two parts: a description of the program; and the evaluators' assessments, including advocate and adversary reports. Achievement tests were not used for evaluation. Evaluative comments follow each segment of the…

  4. Measuring Evaluation Fears in Adolescence: Psychometric Validation of the Portuguese Versions of the Fear of Positive Evaluation Scale and the Specific Fear of Negative Evaluation Scale

    ERIC Educational Resources Information Center

    Vagos, Paula; Salvador, Maria do Céu; Rijo, Daniel; Santos, Isabel M.; Weeks, Justin W.; Heimberg, Richard G.

    2016-01-01

    Modified measures of Fear of Negative Evaluation and Fear of Positive Evaluation were examined among Portuguese adolescents. These measures demonstrated replicable factor structure, internal consistency, and positive relationships with social anxiety and avoidance. Gender differences were found. Implications for evaluation and intervention are…

  5. Social Work and Evaluation: Why You Might Be Interested in the American Evaluation Association Social Work Topical Interest Group

    ERIC Educational Resources Information Center

    Wharton, Tracy C.; Kazi, Mansoor A.

    2012-01-01

    With increased pressure on programs to evaluate outcomes, the issue of evaluation in social work has never been so topical. In response to these pressures, there has been a growing interest in evidence-based practice and strategies for the evaluation of social work programs. The American Evaluation Association (AEA) is an international…

  6. Evaluative Research in Population Education: Manual Arising out of a Regional Training Workshop (Manila, May 20-31, 1985). Population Education Programme Service.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.

    This manual presents the very basics of monitoring, evaluation, and evaluative research as applied to population education. It is designed for beginners and is useful to project staff charged with the responsibility of monitoring, evaluation, and research. Chapter 1 discusses monitoring and evaluation. Chapter 2 examines evaluative research…

  7. Evaluation of NASA space grant consortia programs

    NASA Technical Reports Server (NTRS)

    Eisenberg, Martin A.

    1990-01-01

    The meaningful evaluation of the NASA Space Grant Consortium and Fellowship Programs must overcome unusual difficulties: (1) the program, in its infancy, is undergoing dynamic change; (2) the several state consortia and universities have widely divergent parochial goals that defy a uniform evaluative process; and (3) the pilot-sized consortium programs require that the evaluative process be economical in human costs, lest the process of evaluation compromise the effectiveness of the programs it is meant to assess. This paper attempts to assess the context in which evaluation is to be conducted and the goals and limitations inherent in that evaluation, and to recommend appropriate guidelines for evaluation.

  8. Intelligent Evaluation Method of Tank Bottom Corrosion Status Based on Improved BP Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Dai, Guang; Zhang, Ying

    According to the acoustic emission information and the appearance inspection information from online testing of tank bottoms, the external factors associated with tank bottom corrosion status are identified. Applying an artificial neural network intelligent evaluation method, three evaluation models of tank bottom corrosion status are established, based on appearance inspection information, acoustic emission information, and online testing information, respectively. Compared with the results of acoustic emission online testing on a test sample, the evaluation model based on online testing information achieves an accuracy of 94%. The model can evaluate tank bottom corrosion accurately and enables intelligent evaluation in acoustic emission online testing of tank bottoms.
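
    The abstract does not give the network's structure or training details. The sketch below, under assumed feature names and layer sizes, shows a minimal standard BP (backpropagation) network of the general kind described: one hidden layer trained by gradient descent to map inspection features to a corrosion label.

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      # Hypothetical features: [AE hit rate, thickness loss, coating damage];
      # label 1 = "severe corrosion". Data are synthetic stand-ins.
      X = rng.random((40, 3))
      y = (X @ np.array([0.5, 1.5, 1.0]) > 1.5).astype(float).reshape(-1, 1)

      W1 = rng.normal(0.0, 0.5, (3, 6)); b1 = np.zeros(6)   # input -> hidden
      W2 = rng.normal(0.0, 0.5, (6, 1)); b2 = np.zeros(1)   # hidden -> output
      lr = 0.5

      for _ in range(2000):                    # plain BP: forward pass, then backprop
          H = sigmoid(X @ W1 + b1)             # hidden activations
          out = sigmoid(H @ W2 + b2)           # predicted probability of "severe"
          d_out = (out - y) * out * (1 - out)  # squared-error gradient at output
          d_H = (d_out @ W2.T) * H * (1 - H)   # gradient propagated back to hidden
          W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
          W1 -= lr * X.T @ d_H / len(X);  b1 -= lr * d_H.mean(axis=0)

      print(f"training accuracy: {((out > 0.5) == y).mean():.2f}")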

  9. Designs and methods used in published Australian health promotion evaluations 1992-2011.

    PubMed

    Chambers, Alana Hulme; Murphy, Kylie; Kolbe, Anthony

    2015-06-01

    To describe the designs and methods used in published Australian health promotion evaluation articles between 1992 and 2011. Using a content analysis approach, we reviewed 157 articles to analyse patterns and trends in designs and methods in Australian health promotion evaluation articles. The purpose was to provide empirical evidence about the types of designs and methods used. The most common type of evaluation conducted was impact evaluation. Quantitative designs were used exclusively in more than half of the articles analysed. Almost half the evaluations utilised only one data collection method. Surveys were the most common data collection method used. Few articles referred explicitly to an intended evaluation outcome or benefit and references to published evaluation models or frameworks were rare. This is the first time Australian-published health promotion evaluation articles have been empirically investigated in relation to designs and methods. There appears to be little change in the purposes, overall designs and methods of published evaluations since 1992. More methodologically transparent and sophisticated published evaluation articles might be instructional, and even motivational, for improving evaluation practice and result in better public health interventions and outcomes. © 2015 Public Health Association of Australia.

  10. Building a community-based culture of evaluation.

    PubMed

    Janzen, Rich; Ochocka, Joanna; Turner, Leanne; Cook, Tabitha; Franklin, Michelle; Deichert, Debbie

    2017-12-01

    In this article we argue for a community-based approach as a means of promoting a culture of evaluation. We do this by linking two bodies of knowledge - the 70-year theoretical tradition of community-based research and the trans-discipline of program evaluation - that are seldom intersected within the evaluation capacity building literature. We use the three hallmarks of a community-based research approach (community-determined; equitable participation; action and change) as a conceptual lens to reflect on a case example of an evaluation capacity building program led by the Ontario Brain Institute. This program involved two community-based groups (Epilepsy Southwestern Ontario and the South West Alzheimer Society Alliance) who were supported by evaluators from the Centre for Community Based Research to conduct their own internal evaluation. The article provides an overview of a community-based research approach and its link to evaluation. It then describes the featured evaluation capacity building initiative, including reflections by the participating organizations themselves. We end by discussing lessons learned and their implications for future evaluation capacity building. Our main argument is that organizations that strive towards a community-based approach to evaluation are well placed to build and sustain a culture of evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Lessons from the trenches: meeting evaluation challenges in school health education.

    PubMed

    Young, Michael; Denny, George; Donnelly, Joseph

    2012-11-01

    Those involved in school health education programs generally believe that health-education programs can play an important role in helping young people make positive health decisions. Thus, it is important to document the effects of such programs through rigorous evaluations published in peer-reviewed journals. This paper helps the reader understand the context of school health program evaluation, examines several problems and challenges, shows how problems can often be fixed or prevented, and demonstrates ways in which challenges can be met. A number of topics are addressed, including distinguishing between curricula evaluation and evaluation of outcomes, types of evaluation, identifying stakeholders in school health evaluation, selection of a program evaluator, recruiting participants, design issues, staff training, parental consent, instrumentation, program implementation and treatment fidelity, participant retention, data collection, data analysis and interpretation, presentation of results, and manuscript preparation and submission. Although rigorous evaluation of health-education programs remains scarce, the rigorous evaluations that have been conducted have, at least in some cases, led to wider dissemination of effective programs. These suggestions will help those interested in school health education understand the importance of evaluation and will provide important guidelines for those conducting evaluations of school health-education programs. © 2012, American School Health Association.

  12. Organizational determinants of evaluation practice in Australian prevention agencies.

    PubMed

    Schwarzman, J; Bauman, A; Gabbe, B; Rissel, C; Shilton, T; Smith, B J

    2018-06-01

    Program evaluation is essential to inform decision making, contribute to the evidence base for strategies, and facilitate learning in health promotion and disease prevention organizations. Theoretical frameworks of organizational learning, and studies of evaluation capacity building describe the organization as central to evaluation capacity. Australian prevention organizations recognize limitations to current evaluation effectiveness and are seeking guidance to build evaluation capacity. This qualitative study identifies organizational facilitators and barriers to evaluation practice, and explores their interactions in Australian prevention organizations. We conducted semi-structured interviews with 40 experienced practitioners from government and non-government organizations. Using thematic analysis, we identified seven key themes that influence evaluation practice: leadership, organizational culture, organizational systems and structures, partnerships, resources, workforce development and training and recruitment and skills mix. We found organizational determinants of evaluation to have multi-level interactions. Leadership and organizational culture influenced organizational systems, resource allocation and support of staff. Partnerships were important to overcome resource deficits, and systems were critical to embed evaluation within the organization. Organizational factors also influenced the opportunities for staff to develop skills and confidence. We argue that investment to improve these factors would allow organizations to address evaluation capacity at multiple levels, and ultimately facilitate effective evaluation practice.

  13. Evaluation utilization research--developing a theory and putting it to use.

    PubMed

    Neuman, Ari; Shahor, Neria; Shina, Ilan; Sarid, Anat; Saar, Zehava

    2013-02-01

    This article presents the findings of a two-stage study that had two key objectives: to develop a theory about evaluation utilization in an educational organization and to apply this theory to promote evaluation utilization within the organization. The first stage involved a theoretical conceptualization using a participatory method of concept mapping. This process identified the modes of evaluation utilization within the organization, produced a representation of the relationship between them and led to a theory. The second stage examined the practical implications of this conceptualization in terms of how different stakeholders in the organization perceive the actual and preferable state of evaluation utilization within the organization (i.e. to what extent is evaluation utilized and to what extent should it be utilized). The participatory process of the study promoted the evaluation utilization by involving stakeholders, thus giving them a sense of ownership and improving communication between the evaluation unit and the stakeholders. In addition, understanding the evaluation needs of the stakeholders in the organization helped generate relevant and realizable evaluation processes. On a practical level, the results are currently shaping the evaluation plan and the place of evaluations within the organization. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. [Analysis of evaluation process of research projects submitted to the Fondo de Investigación Sanitaria, Spain].

    PubMed

    Prieto Carles, C; Gómez-Gerique, J; Gutiérrez Millet, V; Veiga de Cabo, J; Sanz Martul, E; Mendoza Hernández, J L

    2000-10-07

    At present it is clear that strengthening research is both essential and the right way to foster technological innovation, services, and patents. However, such strengthening, and the corresponding funding, must rest on a fine-grained and rigorous evaluation process: all research projects applying to a public or private call for proposals should be assessed to ensure coherence with the investment to be made. To this end, the main aim of this work was to analyze the evaluation process traditionally used by the Fondo de Investigación Sanitaria (FIS) and to propose appropriate modifications. A sample of 431 research projects from the 1998 call was analyzed. The evaluations from FIS and ANEP (National Evaluation and Prospective Agency) were themselves assessed and scored (evaluation quality) on their main contents by 3 independent evaluators, and the results were compared between the agencies at the internal (FIS) and external (FIS/ANEP) levels. The FIS evaluation involved 20 commissions, or areas of knowledge. The internal (FIS) analysis clearly showed that evaluation quality was related to the assigned commission (F = 3.71; p < 0.001) and to the duration of the proposed project (F = 3.42; p < 0.05), but not to the evaluator. The quality of the ANEP evaluation, on the other hand, depended on all three factors. Overall, the ANEP evaluation was better than the FIS evaluation for three-year projects but showed no significant differences for one- or two-year projects. In all cases, evaluations with a negative outcome (funding denied) showed a higher average quality than positive evaluations. These results suggest that some changes should be made to the evaluation structure and that the set of FIS technical commissions should be reviewed with a view to improving the evaluation process.
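
    The reported F statistics correspond to one-way analyses of variance. As a minimal sketch of that kind of test, the code below compares hypothetical evaluation-quality scores across three invented commissions; the data and labels are not from the study.

      from scipy.stats import f_oneway

      # Hypothetical evaluation-quality scores (0-10) grouped by commission.
      commission_a = [6.1, 5.8, 7.0, 6.5, 5.9]
      commission_b = [7.4, 8.0, 7.7, 7.9, 8.2]
      commission_c = [5.2, 6.0, 5.5, 6.3, 5.8]

      f_stat, p_value = f_oneway(commission_a, commission_b, commission_c)
      # A significant F would indicate quality differs across commissions.
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")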

  15. Community outreach: from measuring the difference to making a difference with health information*

    PubMed Central

    Ottoson, Judith M.; Green, Lawrence W.

    2005-01-01

    Background: Community-based outreach seeks to move libraries beyond their traditional institutional boundaries to improve both access to and effectiveness of health information. The evaluation of such outreach needs to involve the community in assessing the program's process and outcomes. Purpose: Evaluation of community-based library outreach programs benefits from a participatory approach. To explain this premise of the paper, three components of evaluation theory are paired with relevant participatory strategies. Concepts: The first component of evaluation theory is also a standard of program evaluation: use. Evaluation is intended to be useful for stakeholders to make decisions. A useful evaluation is credible, timely, and of adequate scope. Participatory approaches to increase use of evaluation findings include engaging end users early in planning the program itself and in deciding on the outcomes of the evaluation. A second component of evaluation theory seeks to understand what is being evaluated, such as specific aspects of outreach programs. A transparent understanding of the ways outreach achieves intended goals, its activities and linkages, and the context in which it operates precedes any attempt to measure it. Participatory approaches to evaluating outreach include having end users, such as health practitioners in other community-based organizations, identify what components of the outreach program are most important to their work. A third component of evaluation theory is concerned with the process by which value is placed on outreach. What will count as outreach success or failure? Who decides? Participatory approaches to valuing include assuring end-user representation in the formulation of evaluation questions and in the interpretation of evaluation results. Conclusions: The evaluation of community-based outreach is a complex process that is not made easier by a participatory approach. Nevertheless, a participatory approach is more likely to make the evaluation findings useful, ensure that program knowledge is shared, and make outreach valuing transparent. PMID:16239958

  16. Quantitative Evaluation of Heavy Duty Machine Tools Remanufacturing Based on Modified Catastrophe Progression Method

    NASA Astrophysics Data System (ADS)

    Li, Shunhe; Rao, Jianhua; Gui, Lin; Zhang, Weimin; Liu, Degang

    2017-11-01

    The result of a remanufacturing evaluation is the basis for judging whether a heavy duty machine tool can be remanufactured at the end-of-life (EOL) stage of its lifecycle management. The objectivity and accuracy of the evaluation are the key to any evaluation method. In this paper, the catastrophe progression method is introduced into the quantitative evaluation of heavy duty machine tool remanufacturing, and the results are modified by the comprehensive adjustment method, which brings the evaluation results into line with conventional human judgment. The catastrophe progression method is used to establish a quantitative evaluation model for heavy duty machine tools and to evaluate the remanufacturing of a retired TK6916 CNC floor milling-boring machine. The evaluation process is simple and highly quantitative, and the result is objective.
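
    The paper's exact model is not reproduced here. The sketch below illustrates the standard catastrophe progression normalization for a butterfly catastrophe (four control variables, normalized with square, cube, fourth, and fifth roots) and a simple averaging aggregation for complementary indicators; the indicator names, values, and aggregation choice are assumptions.

      # Standard catastrophe progression membership functions: for a butterfly
      # catastrophe, the i-th control variable (ranked by importance) is
      # normalized by the (i+1)-th root: square, cube, fourth, fifth.
      EXPONENTS = [1 / 2, 1 / 3, 1 / 4, 1 / 5]

      def catastrophe_score(indicators, complementary=True):
          """indicators: up to four values in [0, 1], ordered by importance."""
          levels = [x ** e for x, e in zip(indicators, EXPONENTS)]
          # Complementary indicators are averaged; otherwise take the minimum.
          return sum(levels) / len(levels) if complementary else min(levels)

      # Hypothetical normalized indicators for a retired machine tool:
      # structural integrity, remaining precision, economic value, energy use.
      print(f"remanufacturability score: {catastrophe_score([0.81, 0.64, 0.55, 0.40]):.3f}")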

  17. Can Principals Promote Teacher Development as Evaluators? A Case Study of Principals' Views and Experiences.

    PubMed

    Kraft, Matthew A; Gilmour, Allison

    2016-12-01

    New teacher evaluation systems have expanded the role of principals as instructional leaders, but little is known about principals' ability to promote teacher development through the evaluation process. We conducted a case study of principals' perspectives on evaluation and their experiences implementing observation and feedback cycles to better understand whether principals feel as though they are able to promote teacher development as evaluators. We conducted interviews with a stratified random sample of 24 principals in an urban district that recently implemented major reforms to its teacher evaluation system. We analyzed these interviews by drafting thematic summaries, coding interview transcripts, creating data-analytic matrices, and writing analytic memos. We found that the evaluation reforms provided a common framework and language that helped facilitate principals' feedback conversations with teachers. However, we also found that tasking principals with primary responsibility for conducting evaluations resulted in a variety of unintended consequences which undercut the quality of evaluation feedback they provided. We analyze five broad solutions to these challenges: strategically targeting evaluations, reducing operational responsibilities, providing principal training, hiring instructional coaches, and developing peer evaluation systems. The quality of feedback teachers receive through the evaluation process depends critically on the time and training evaluators have to provide individualized and actionable feedback. Districts that task principals with primary responsibility for conducting observation and feedback cycles must attend to the many implementation challenges associated with this approach in order for next-generation evaluation systems to successfully promote teacher development.

  18. Evaluation Framework for Telemedicine Using the Logical Framework Approach and a Fishbone Diagram

    PubMed Central

    2015-01-01

    Objectives Technological advances using telemedicine and telehealth are growing in healthcare fields, but the evaluation framework for them is inconsistent and limited. This paper suggests a comprehensive evaluation framework for telemedicine system implementation that will support related stakeholders' decision-making by promoting general understanding and resolving arguments and controversies. Methods This study focused on developing a comprehensive evaluation framework by summarizing themes across the range of evaluation techniques and organizing foundational evaluation frameworks generally applicable across studies and cases of diverse telemedicine. Evaluation factors related to aspects of information technology, including the satisfaction of service providers and consumers, cost, quality, and information security, are organized using a fishbone diagram. Results It was not easy to develop a monitoring and evaluation framework for telemedicine, since evaluation frameworks for telemedicine are very complex, with many potential inputs, activities, outputs, outcomes, and stakeholders. A conceptual framework was developed that incorporates the key dimensions that need to be considered in the evaluation of telehealth implementation, providing a formal, structured approach to the evaluation of a service. The suggested framework consists of six major dimensions and subsequent branches for each dimension. Conclusions To implement telemedicine and telehealth services, stakeholders should make decisions based on sufficient evidence of quality and safety, as measured by the comprehensive evaluation framework. Further work would be valuable in applying more comprehensive evaluations to verify and improve the framework across a variety of contexts with more factors and participant group dimensions. PMID:26618028

  19. Evaluation Framework for Telemedicine Using the Logical Framework Approach and a Fishbone Diagram.

    PubMed

    Chang, Hyejung

    2015-10-01

    Technological advances using telemedicine and telehealth are growing in healthcare fields, but the evaluation framework for them is inconsistent and limited. This paper suggests a comprehensive evaluation framework for telemedicine system implementation that will support related stakeholders' decision-making by promoting general understanding and resolving arguments and controversies. This study focused on developing a comprehensive evaluation framework by summarizing themes across the range of evaluation techniques and organizing foundational evaluation frameworks generally applicable across studies and cases of diverse telemedicine. Evaluation factors related to aspects of information technology, including the satisfaction of service providers and consumers, cost, quality, and information security, are organized using a fishbone diagram. It was not easy to develop a monitoring and evaluation framework for telemedicine, since evaluation frameworks for telemedicine are very complex, with many potential inputs, activities, outputs, outcomes, and stakeholders. A conceptual framework was developed that incorporates the key dimensions that need to be considered in the evaluation of telehealth implementation, providing a formal, structured approach to the evaluation of a service. The suggested framework consists of six major dimensions and subsequent branches for each dimension. To implement telemedicine and telehealth services, stakeholders should make decisions based on sufficient evidence of quality and safety, as measured by the comprehensive evaluation framework. Further work would be valuable in applying more comprehensive evaluations to verify and improve the framework across a variety of contexts with more factors and participant group dimensions.

  20. Conceptual framework for development of comprehensive e-health evaluation tool.

    PubMed

    Khoja, Shariq; Durrani, Hammad; Scott, Richard E; Sajwani, Afroz; Piryani, Usha

    2013-01-01

    The main objective of this study was to develop an e-health evaluation tool based on a conceptual framework including relevant theories for evaluating use of technology in health programs. This article presents the development of an evaluation framework for e-health programs. The study was divided into three stages: Stage 1 involved a detailed literature search of different theories and concepts on evaluation of e-health, Stage 2 plotted e-health theories to identify relevant themes, and Stage 3 developed a matrix of evaluation themes and stages of e-health programs. The framework identifies and defines different stages of e-health programs and then applies evaluation theories to each of these stages for development of the evaluation tool. This framework builds on existing theories of health and technology evaluation and presents a conceptual framework for developing an e-health evaluation tool to examine and measure different factors that play a definite role in the success of e-health programs. The framework on the horizontal axis divides e-health into different stages of program implementation, while the vertical axis identifies different themes and areas of consideration for e-health evaluation. The framework helps understand various aspects of e-health programs and their impact that require evaluation at different stages of the life cycle. The study led to the development of a new and comprehensive e-health evaluation tool, named the Khoja-Durrani-Scott Framework for e-Health Evaluation.
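
    The framework is described as a matrix of program stages against evaluation themes. The sketch below shows a minimal data structure for such a matrix; the stage and theme labels are hypothetical placeholders, not the published framework's categories.

      # Hypothetical stage and theme labels; the published framework's own
      # categories are not reproduced here.
      STAGES = ["development", "implementation", "integration", "sustained operation"]
      THEMES = ["health outcomes", "technology", "economic", "behavioral", "ethical"]

      # One cell of evaluation questions per theme/stage pair.
      matrix = {t: {s: [] for s in STAGES} for t in THEMES}
      matrix["technology"]["implementation"].append(
          "Is the system reliable under field conditions?")

      for theme, row in matrix.items():
          filled = sum(bool(cell) for cell in row.values())
          print(f"{theme}: questions defined for {filled}/{len(STAGES)} stages")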

  1. Evaluation in Human Resource Development.

    ERIC Educational Resources Information Center

    1999

    These four papers are from a symposium on evaluation in human resource development (HRD). "Assessing Organizational Readiness for Learning through Evaluative Inquiry" (Hallie Preskill, Rosalie T. Torres) reviews how evaluative inquiry can facilitate organizational learning; argues HRD evaluation should be reconceptualized as a process…

  2. Evaluating Educational Programs.

    ERIC Educational Resources Information Center

    Ball, Samuel

    The activities of Educational Testing Service (ETS) in evaluating educational programs are described. Program evaluations are categorized as needs assessment, formative evaluation, or summative evaluation. Three classic efforts which illustrate the range of ETS' participation are the Pennsylvania Goals Study (1965), the Coleman Report--Equality of…

  3. Legislative Evaluation.

    ERIC Educational Resources Information Center

    Fox, Harrison

    The speaker discusses Congressional program evaluation. From the Congressional perspective, good evaluators understand the political, social, and economic processes; are familiar with various evaluation methods; and know how to use authority and power within their roles. Program evaluation serves three major purposes: to anticipate social impact…

  4. Learning Self-Evaluation: Challenges for Students.

    ERIC Educational Resources Information Center

    MacGregor, Jean

    1993-01-01

    Self-evaluation is unfamiliar to most college students. Teachers can use varied approaches to support students in overcoming unfamiliarity with self-evaluation, lack of confidence in describing learning, writing difficulties, evaluation difficulties, discomfort discussing academic problems, cultural bias against self-evaluation, emotional…

  5. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  6. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  7. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  8. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  9. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  10. Handbook for Improving Superintendent Performance Evaluation.

    ERIC Educational Resources Information Center

    Candoli, Carl; And Others

    This handbook for superintendent performance evaluation contains information for boards of education as they institute or improve their evaluation system. The handbook answers questions involved in operationalizing, implementing, and evaluating a superintendent-evaluation system. The information was developed from research on superintendent…

  11. 48 CFR 1336.602-2 - Evaluation boards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Evaluation boards. 1336...-2 Evaluation boards. Permanent and ad hoc architect-engineer evaluation boards may include... evaluation boards should be comprised of at least a majority of government personnel. ...

  12. Self Evaluation of Organizations.

    ERIC Educational Resources Information Center

    Pooley, Richard C.

    Evaluation within human service organizations is defined in terms of accepted evaluation criteria, with reasonable expectations shown and structured into a model of systematic evaluation practice. The evaluation criteria of program effort, performance, adequacy, efficiency and process mechanisms are discussed, along with measurement information…

  13. Evaluability Assessment: A Retrospective Illustration and Review.

    ERIC Educational Resources Information Center

    Smith, Nick L.

    1981-01-01

    Rutman's view of evaluability assessment is reviewed, evaluation planning activities are illustrated via flow diagram for a large educational evaluation designed to increase citizen participation in local school activities, and some of the limitations of Rutman's evaluability procedures are outlined. (RL)

  14. Presidential Address: Empowerment Evaluation.

    ERIC Educational Resources Information Center

    Fetterman, David

    1994-01-01

    Empowerment evaluation is the use of evaluation concepts and techniques to foster self-determination, focusing on helping people help themselves. This collaborative evaluation approach requires both qualitative and quantitative methodologies. It is a multifaceted approach that can be applied to evaluation in any area. (SLD)

  15. SOAR 89: Space Station. Space suit test program

    NASA Technical Reports Server (NTRS)

    Kosmo, Joseph J.; West, Philip; Rouen, Michael

    1990-01-01

    The elements of the test program for the space suit to be used on Space Station Freedom are noted in viewgraph form. Information is given on evaluation objectives, zero gravity evaluation, mobility evaluation, extravehicular activity task evaluation, and shoulder joint evaluation.

  16. What and How Are We Evaluating? Meta-Evaluation Study of the NASA Innovations in Climate Education (NICE) Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Barnes, M. H.; Chambers, L. H.; Pippin, M. R.

    2011-12-01

    As part of NASA's Minority University Research and Education Program (MUREP), the NASA Innovations in Climate Education (NICE) project at Langley Research Center has funded 71 climate education initiatives since 2008. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. Each of the funded projects proposes and carries out its own evaluation plan, in collaboration with external or internal evaluation experts. Using this portfolio as an exemplar case, NICE has undertaken a systematic meta-evaluation of these plans, focused primarily on evaluation questions, approaches, and methods. This meta-evaluation study seeks to understand the range of evaluations represented in the NICE portfolio, including descriptive information (what evaluations, questions, designs, approaches, and methods are applied?) and questions of value (do these evaluations meet the needs of projects and their staff, and of NASA/NICE?). In the current climate, as federal funders of climate change and STEM education projects seek to better understand and incorporate evaluation into their decisions, evaluators and project leaders are also seeking to build robust understanding of program effectiveness. Meta-evaluations like this provide some baseline understanding of the current status quo and the kinds of evaluations carried out within such funding portfolios. These explorations are needed to understand the common ground between evaluative best practices, limited resources, and agencies' desires, capacity, and requirements. When NASA asks for evaluation of funded projects, what happens? Which questions are asked and answered, using which tools? To what extent do the evaluations meet the needs of projects and program officers? How do they contribute to best practices in climate science education? These questions are important to ask about STEM and climate literacy work more generally; the NICE portfolio provides a broad test case for thinking strategically, critically, and progressively about evaluation in our community. Our findings can inform the STEM education, communication, and public outreach communities, and prompt us to consider a broad range of informative evaluation options. During this presentation, we will consider the breadth, depth and utility of evaluations conducted through a NASA climate education funding opportunity. We will examine the relationship between what we want to know about education programs, what we want to achieve with our interventions, and what we ask in our evaluations.

  17. What and How Are We Evaluating? Meta-Evaluation Study of the NASA Innovations in Climate Education (NICE) Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Barnes, M. H.; Chambers, L. H.; Pippin, M. R.

    2013-12-01

    As part of NASA's Minority University Research and Education Program (MUREP), the NASA Innovations in Climate Education (NICE) project at Langley Research Center has funded 71 climate education initiatives since 2008. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. Each of the funded projects proposes and carries out its own evaluation plan, in collaboration with external or internal evaluation experts. Using this portfolio as an exemplar case, NICE has undertaken a systematic meta-evaluation of these plans, focused primarily on evaluation questions, approaches, and methods. This meta-evaluation study seeks to understand the range of evaluations represented in the NICE portfolio, including descriptive information (what evaluations, questions, designs, approaches, and methods are applied?) and questions of value (do these evaluations meet the needs of projects and their staff, and of NASA/NICE?). In the current climate, as federal funders of climate change and STEM education projects seek to better understand and incorporate evaluation into their decisions, evaluators and project leaders are also seeking to build robust understanding of program effectiveness. Meta-evaluations like this provide some baseline understanding of the current status quo and the kinds of evaluations carried out within such funding portfolios. These explorations are needed to understand the common ground between evaluative best practices, limited resources, and agencies' desires, capacity, and requirements. When NASA asks for evaluation of funded projects, what happens? Which questions are asked and answered, using which tools? To what extent do the evaluations meet the needs of projects and program officers? How do they contribute to best practices in climate science education? These questions are important to ask about STEM and climate literacy work more generally; the NICE portfolio provides a broad test case for thinking strategically, critically, and progressively about evaluation in our community. Our findings can inform the STEM education, communication, and public outreach communities, and prompt us to consider a broad range of informative evaluation options. During this presentation, we will consider the breadth, depth and utility of evaluations conducted through a NASA climate education funding opportunity. We will examine the relationship between what we want to know about education programs, what we want to achieve with our interventions, and what we ask in our evaluations.

  18. Insight into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals

    ERIC Educational Resources Information Center

    Christie, Christina A.; Fleischer, Dreolin Nesbitt

    2010-01-01

    To describe the recent practice of evaluation, specifically method and design choices, the authors performed a content analysis on 117 evaluation studies published in eight North American evaluation-focused journals for a 3-year period (2004-2006). The authors chose this time span because it follows the scientifically based research (SBR)…

  19. An Electronic Competency-Based Evaluation Tool for Assessing Humanitarian Competencies in a Simulated Exercise.

    PubMed

    Evans, Andrea B; Hulme, Jennifer M; Nugus, Peter; Cranmer, Hilarie H; Coutu, Melanie; Johnson, Kirsten

    2017-06-01

    The evaluation tool was derived from the Core Humanitarian Competency Framework of the former Consortium of British Humanitarian Agencies (CBHA; United Kingdom), now the "Start Network," and formatted in an electronic data capture tool that allowed for offline evaluation. During a 3-day humanitarian simulation event, participants in teams of eight to 10 were evaluated individually at multiple injects by trained evaluators. Participants were assessed on five competencies and a global rating scale. Participants evaluated both themselves and their team members using the same tool at the end of the simulation exercise (SimEx). All participants (63) were evaluated. A total of 1,008 individual evaluations were completed. There were 90 (9.0%) missing evaluations. All 63 participants also evaluated themselves and each of their teammates using the same tool. Self-evaluation scores were significantly lower than peer-evaluations, which were significantly lower than evaluators' assessments. Participants with a medical degree, and those with humanitarian work experience of one month or more, scored significantly higher on all competencies assessed by evaluators compared to other participants. Participants with prior humanitarian experience scored higher on competencies regarding operating safely and working effectively as a team member. This study presents a novel electronic evaluation tool to assess individual performance in five of six globally recognized humanitarian competency domains in a 3-day humanitarian SimEx. The evaluation tool provides a standardized approach to the assessment of humanitarian competencies that cannot be evaluated through knowledge-based testing in a classroom setting. When combined with testing knowledge-based competencies, this presents an approach to a comprehensive competency-based assessment that provides an objective measurement of competency with respect to the competencies listed in the Framework. There is an opportunity to advance the use of this tool in future humanitarian training exercises and potentially in real time, in the field. This could impact the efficiency and effectiveness of humanitarian operations. Evans AB, Hulme JM, Nugus P, Cranmer HH, Coutu M, Johnson K. An electronic competency-based evaluation tool for assessing humanitarian competencies in a simulated exercise. Prehosp Disaster Med. 2017;32(3):253-260.

  20. On the automatic activation of attitudes: a quarter century of evaluative priming research.

    PubMed

    Herring, David R; White, Katherine R; Jabeen, Linsa N; Hinojos, Michelle; Terrazas, Gabriela; Reyes, Stephanie M; Taylor, Jennifer H; Crites, Stephen L

    2013-09-01

    Evaluation is a fundamental concept in psychological science. Limitations of self-report measures of evaluation led to an explosion of research on implicit measures of evaluation. One of the oldest and most frequently used implicit measurement paradigms is the evaluative priming paradigm developed by Fazio, Sanbonmatsu, Powell, and Kardes (1986). This paradigm has received extensive attention in psychology and is used to investigate numerous phenomena ranging from prejudice to depression. The current review provides a meta-analysis of a quarter century of evaluative priming research: 73 studies yielding 125 independent effect sizes from 5,367 participants. Because judgments people make in evaluative priming paradigms can be used to tease apart underlying processes, this meta-analysis examined the impact of different judgments to test the classic encoding and response perspectives of evaluative priming. As expected, evidence for automatic evaluation was found, but the results did not exclusively support either of the classic perspectives. Results suggest that both encoding and response processes likely contribute to evaluative priming but are more nuanced than initially conceptualized by the classic perspectives. Additionally, there were a number of unexpected findings that influenced evaluative priming such as segmenting trials into discrete blocks. We argue that many of the findings of this meta-analysis can be explained with 2 recent evaluative priming perspectives: the attentional sensitization/feature-specific attention allocation and evaluation window perspectives. (c) 2013 APA, all rights reserved.
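
    The review's analyses are not reproduced here. As a minimal sketch of the basic aggregation step in such a meta-analysis, the code below computes an inverse-variance-weighted mean effect size and its confidence interval from invented study-level effects.

      import math

      # Hypothetical (effect size d, variance) pairs from independent studies.
      effects = [(0.35, 0.020), (0.22, 0.015), (0.48, 0.040), (0.10, 0.010)]

      weights = [1.0 / v for _, v in effects]        # inverse-variance weights
      mean_d = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
      se = math.sqrt(1.0 / sum(weights))             # SE of the pooled estimate
      print(f"weighted mean d = {mean_d:.3f}, "
            f"95% CI = [{mean_d - 1.96 * se:.3f}, {mean_d + 1.96 * se:.3f}]")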

  1. Stakeholder-focused evaluation of an online course for health care providers.

    PubMed

    Dunet, Diane O; Reyes, Michele

    2006-01-01

    Different people who have a stake or interest in a training course (stakeholders) may have markedly different definitions of what constitutes "training success" and how they will use evaluation results. Stakeholders at multiple levels within and outside of the organization guided the development of an evaluation plan for a Web-based training course on hemochromatosis. Stakeholder interests and values were reflected in the type, level, and rigor of evaluation methods selected. Our mixed-method evaluation design emphasized small sample sizes and repeated measures. Limited resources for evaluation were leveraged by focusing on the data needs of key stakeholders, understanding how they wanted to use evaluation results, and collecting data needed for stakeholder decision making. Regular feedback to key stakeholders provided opportunities for updating the course evaluation plan to meet emerging needs for new or different information. Early and repeated involvement of stakeholders in the evaluation process also helped build support for the final product. Involving patient advocacy groups, managers, and representative course participants improved the course and enhanced product dissemination. For training courses, evaluation planning is an opportunity to tailor methods and data collection to meet the information needs of particular stakeholders. Rigorous evaluation research of every training course may be infeasible or unwarranted; however, course evaluations can be improved by good planning. A stakeholder-focused approach can build a picture of the results and impact of training while fostering the practical use of evaluation data.

  2. Dipping Your Toes into Evaluation in Five Easy Steps: Tips, Tricks, and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Martin, A. M.

    2013-04-01

    With limited funding, staffing, and resources for STEM education projects, the push for rigorous evaluation of our efforts presents significant challenges, but opportunities as well. Evaluative thinking can enrich and improve the entire life cycle of an education, communication, or outreach project, and can take many forms other than a final, summative evaluation report. The community of attendees at the Astronomical Society of the Pacific meeting will share an abundance of evaluation expertise, approaches, and results, but where does one turn if evaluation is a new concept or responsibility? This session will briefly highlight five tips, tricks, and lessons learned from the perspective of a novice and of a NASA project new to evaluation. The resources and ideas shared in the session will represent the concrete advice and driving ideas that put the author on firmer evaluative footing. Themes explored will include: (1) strategies for incorporating evaluative thinking early in the development of a project and throughout its life cycle; (2) the benefit of taking the time to elucidate a program's logic model or theory of action; (3) linking program activities to outcomes that are SMART (specific, measurable, attainable, relevant, and timely); (4) working with an external or internal evaluator; and (5) taking evaluation beyond the formal, final report. Finally, we'll close with resources to help individuals and their organizations learn more about evaluation and build their evaluation capacity.

  3. Researching evaluation influence: a review of the literature.

    PubMed

    Herbert, James Leslie

    2014-10-01

    The impact of an evaluation is an important consideration in designing and carrying out evaluations. Evaluation influence is a way of thinking about the effect that an evaluation can have in the broadest possible terms, which its proponents argue will lead to a systematic body of evidence about influential evaluation practices. This literature review sets out to address three research questions: How have researchers defined evaluation influence; how is this reflected in the research; and what does the research suggest about the utility of evaluation influence as a conceptual framework. Drawing on studies that had cited one of the key evaluation influence articles and conducted original research on some aspect of influence this article reviewed the current state of the literature toward the goal of developing a body of evidence about how to practice influential evaluation. Twenty-eight studies were found that have drawn on evaluation influence, which were categorized into (a) descriptive studies, (b) analytical studies, and (c) hypothesis testing. Despite the prominence of evaluation influence in the literature, there is slow progress toward a persuasive body of literature. Many of the studies reviewed offered vague and inconsistent definitions and have applied influence in an unspecified way in the research. It is hoped that this article will stimulate interest in the systematic study of influence mechanisms, leading to improvements in the potential for evaluation to affect positive social change. © The Author(s) 2014.

  4. Current status of quality evaluation of nursing care through director review and reflection from the Nursing Quality Control Centers

    PubMed Central

    Duan, Xia; Shi, Yan

    2014-01-01

    Background: The quality evaluation of nursing care is a key link in medical quality management. It is important for nursing supervisors to understand the weaknesses in the current process of nursing care quality evaluation so that overall nursing quality can be improved. This study provides directors' insight into the current status of quality evaluation of nursing care from the Nursing Quality Control Centers (NQCCs). Material and Methods: This qualitative study used a sample of 12 directors from NQCCs, recruited from 12 provinces in China, to evaluate the current status of quality evaluation of nursing care. Data were collected by in-depth interviews. The content analysis method was used to analyze the data. Results: Four themes emerged from the data: 1) lag of evaluation indexes; 2) limitations of evaluation content; 3) simplicity of evaluation methods; 4) excessive emphasis on terminal quality. Conclusion: It is of great practical significance to improve nursing quality evaluation criteria, modify evaluation content based on a patient-needs-oriented approach, adopt scientific methods to evaluate nursing quality, and draw scientifically sound horizontal comparisons of nursing quality between hospitals, as well as longitudinal comparisons of a single hospital's nursing quality over time. These measures can enhance a hospital's core competitiveness and benefit more patients. PMID:25419427

  5. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods.

    PubMed

    Pommier, Jeanine; Guével, Marie-Renée; Jourdan, Didier

    2010-01-28

    Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community.

  6. Evaluation of Faculty

    PubMed Central

    Aburawi, Elhadi; McLean, Michelle; Shaban, Sami

    2014-01-01

    Objectives: Student evaluation of individual teachers is important in the quality improvement cycle. The aim of this study was to explore medical student and faculty perceptions of teacher evaluation in the light of dwindling participation in online evaluations. Methods: This study was conducted at the United Arab Emirates University College of Medicine & Health Sciences between September 2010 and June 2011. A 21-item questionnaire was used to investigate learner and faculty perceptions of teacher evaluation in terms of purpose, etiquette, confidentiality and outcome on a five-point Likert scale. Results: The questionnaire was completed by 54% of faculty and 23% of students. Faculty and students generally concurred that teachers should be evaluated by students but believed that the purpose of the evaluation should be explained. Despite acknowledging the confidentiality of online evaluation, faculty members were less sure that they would not recognise individual comments. While students perceived that the culture allowed objective evaluation, faculty members were less convinced. Although teachers claimed to take evaluation seriously, with Medical Sciences faculty members in particular indicating that they changed their teaching as a result of feedback, students were unsure whether teachers responded to feedback. Conclusion: Despite agreement on the value of evaluation, differences between faculty and student perceptions emerged in terms of confidentiality and whether evaluation led to improved practice. Educating both teachers and learners regarding the purpose of evaluation as a transparent process for quality improvement is imperative. PMID:25097772

  7. 48 CFR 236.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONTRACTS Special Aspects of Contracting for Construction 236.201 Evaluation of contractor performance. (a) Preparation of performance evaluation reports. Use DD Form 2626, Performance Evaluation (Construction... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation of contractor...

  8. Educational Evaluation: The State of the Field.

    ERIC Educational Resources Information Center

    Wolf, Richard M., Ed.

    1987-01-01

    Educational evaluation is discussed. Topics include: an evaluation framework, educational objectives and study design from a 20-year perspective, a sample study, educational evaluation for local school improvement, decision-oriented evaluation studies, reporting study results, and professional standards for assuring the quality of educational…

  9. 48 CFR 25.504 - Evaluation Examples.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation Examples. 25... PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504 Evaluation Examples. The following examples illustrate the application of the evaluation procedures in 25.502 and 25.503. The...

  10. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 4 2014-01-01 2014-01-01 false Management evaluations. 210.29 Section 210.29... AGRICULTURE CHILD NUTRITION PROGRAMS NATIONAL SCHOOL LUNCH PROGRAM Additional Provisions § 210.29 Management evaluations. (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State...

  11. The Evaluation of Teaching and Learning.

    ERIC Educational Resources Information Center

    Potocki-Malicet, Danielle; Holmesland, Icara; Estrela, Maria-Teresa; Veiga-Simao, Ana-Margarida

    1999-01-01

    Three different approaches to the evaluation of higher education in European and Scandinavian countries are examined: evaluation of a single discipline across institutions (Finland, Norway, Portugal, Spain, United Kingdom, northern Germany); evaluation of several disciplines within certain institutions (France, Germany); and internal evaluation of…

  12. Social Studies. MicroSIFT Courseware Evaluations.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This compilation of 11 courseware evaluations gives a general overview of available social studies microcomputer courseware for students in grades 3-12. Each evaluation lists title, date, producer, date of evaluation, evaluating institution, cost, ability level, topic, medium of transfer, required hardware, required software, instructional…

  13. Social Studies. MicroSIFT Courseware Evaluations.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This compilation of 17 courseware evaluations gives a general overview of available social studies microcomputer courseware for students in grades 1-12. Each evaluation lists title, date, producer, date of evaluation, evaluating institution, cost, ability level, topic, medium of transfer, required hardware, required software, instructional…

  14. 48 CFR 225.504 - Evaluation examples.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Evaluation examples. 225..., DEPARTMENT OF DEFENSE SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 225.504 Evaluation examples. For examples that illustrate the evaluation procedures in 225.502(c)(ii...

  15. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Management evaluations. 210.29 Section 210.29... AGRICULTURE CHILD NUTRITION PROGRAMS NATIONAL SCHOOL LUNCH PROGRAM Additional Provisions § 210.29 Management evaluations. (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State...

  16. Test techniques for evaluating flight displays

    NASA Technical Reports Server (NTRS)

    Haworth, Loran A.; Newman, Richard L.

    1993-01-01

    The rapid development of graphics technology allows for greater flexibility in aircraft displays, but display evaluation techniques have not kept pace. Historically, display evaluation has been based on subjective opinion and not on the actual aircraft/pilot performance. Existing electronic display specifications and evaluation techniques are reviewed. A display rating technique analogous to handling qualities ratings was developed and is recommended for future evaluations. The choice of evaluation pilots is also discussed and the use of a limited number of trained evaluators is recommended over the use of a large number of operational pilots.

  17. A Tentative Study on the Evaluation of Community Health Service Quality*

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-qiang; Zhu, Yong-yue

    Community health service is a key point of health reform in China. Based on pertinent studies, this paper constructed an indicator system for community health service quality evaluation from five perspectives (tangible image, reliability, responsiveness, assurance, and empathy), according to the service quality evaluation scale designed by Parasuraman, Zeithaml, and Berry. A multilevel fuzzy synthetic evaluation model grounded in fuzzy mathematics was constructed to evaluate community health service quality. The applicability and operability of the evaluation indicator system and evaluation model were verified by empirical analysis.
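
    To make the mechanics concrete, here is a minimal single-level sketch of the fuzzy synthetic evaluation step such a model builds on; the weights, rating grades, and membership values below are illustrative assumptions, not data from the study. A full multilevel model would repeat this step, with lower-level results feeding the next level's membership matrix.

    ```python
    import numpy as np

    # Illustrative weights for the five service-quality perspectives
    # (tangible image, reliability, responsiveness, assurance, empathy).
    # These values are assumptions for demonstration, not the paper's data.
    w = np.array([0.15, 0.25, 0.20, 0.20, 0.20])

    # Membership matrix R: each row gives the degree to which one perspective
    # belongs to each rating grade (excellent, good, fair, poor), e.g. derived
    # from survey response frequencies.
    R = np.array([
        [0.30, 0.40, 0.20, 0.10],
        [0.25, 0.45, 0.20, 0.10],
        [0.20, 0.40, 0.30, 0.10],
        [0.35, 0.40, 0.15, 0.10],
        [0.30, 0.35, 0.25, 0.10],
    ])

    # Weighted-average composition operator B = w . R, then normalize.
    B = w @ R
    B /= B.sum()
    grades = ["excellent", "good", "fair", "poor"]
    print("Grade memberships:", B.round(3))
    print("Overall grade:", grades[int(B.argmax())])
    ```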

  18. Fuzzy Evaluating Customer Satisfaction of Jet Fuel Companies

    NASA Astrophysics Data System (ADS)

    Cheng, Haiying; Fang, Guoyi

    Based on the market characteristics of jet fuel companies, this paper proposes an evaluation index system for jet fuel company customer satisfaction along five dimensions: time, business, security, fee, and service. A multi-level fuzzy evaluation model combining the analytic hierarchy process (AHP) approach with the fuzzy evaluation approach is then given. Finally, a case study of one jet fuel company's customer satisfaction evaluation is presented; the results reflect the feelings of the company's customers, showing that the fuzzy evaluation model is effective and efficient.
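
    As a companion to the AHP side of that model, the following sketch shows the standard principal-eigenvector derivation of dimension weights from a pairwise comparison matrix, with a consistency-ratio check. The comparison judgments below are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    # Pairwise comparison matrix for the five dimensions
    # (time, business, security, fee, service) on Saaty's 1-9 scale.
    # Entries are illustrative assumptions, not the paper's judgments.
    A = np.array([
        [1,   3,   2,   4,   3],
        [1/3, 1,   1/2, 2,   1],
        [1/2, 2,   1,   3,   2],
        [1/4, 1/2, 1/3, 1,   1/2],
        [1/3, 1,   1/2, 2,   1],
    ], dtype=float)

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # index of principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalized priority weights

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
    cr = ci / 1.12                         # random index RI = 1.12 for n = 5
    print("weights:", w.round(3), " CR:", round(cr, 3))  # CR < 0.1 is acceptable
    ```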

  19. Policy evaluation and democracy: Do they fit?

    PubMed

    Sager, Fritz

    2017-08-05

    The papers assembled in this special issue shed light on the question of the interrelation between democracy and policy evaluation by discussing research on the use of evaluations in democratic processes. The collection makes a case for a stronger presence of evaluation in democracy beyond expert utilization. Parliamentarians prove to be more acquainted with evaluations than expected, and the inclusion of evaluations in policy arguments increases the deliberative quality of democratic campaigns. In sum, evaluation and democracy turn out to be well compatible after all. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Risk of fever and sepsis evaluations after routine immunizations in the neonatal intensive care unit.

    PubMed

    Navar-Boggan, A M; Halsey, N A; Golden, W C; Escobar, G J; Massolo, M; Klein, N P

    2010-09-01

    Premature infants can experience cardiorespiratory events such as apnea after immunization in the neonatal intensive care unit (NICU). These changes in clinical status may precipitate sepsis evaluations. This study evaluated whether sepsis evaluations are increased after immunizations in the NICU. We conducted a retrospective cohort study of infants older than 53 days who were vaccinated in the NICU at the KPMCP (Kaiser Permanente Medical Care Program). Chart reviews were carried out before and after all immunizations were administered and for all sepsis evaluations after age 53 days. The clinical characteristics of infants on the day before receiving a sepsis evaluation were compared between children undergoing post-immunization sepsis evaluations and children undergoing sepsis evaluation at other times. The incidence rate of sepsis evaluations in the post-immunization period was compared with the rate in a control time period not following immunization using Poisson regression. A total of 490 infants met the inclusion criteria. The rate of fever was increased in the 24 h period after vaccination (2.3%, P<0.05). The incidence rate of sepsis evaluations was 40% lower after immunization than during the control period, although this was not statistically significant (P=0.09). Infants undergoing a sepsis evaluation after immunization were more likely to have an apneic, bradycardic or moderate-to-severe cardiorespiratory event in the day before the evaluation than were infants undergoing sepsis evaluations at other times (P<0.05). Despite an increase in fever and cardiorespiratory events after immunization in the NICU, routine vaccination was not associated with increased risk of receiving sepsis evaluations. Providers may be deferring immunizations until infants are clinically stable, or may have a higher threshold for initiating sepsis evaluations after immunization than at other times.
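
    The rate comparison described above is a standard Poisson regression with an exposure offset. The sketch below, which uses invented counts and infant-days rather than the study's data, shows one way such an incidence rate ratio might be estimated in Python.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: sepsis-evaluation counts and time at risk
    # (infant-days) in the post-immunization window vs. control windows.
    counts   = np.array([9, 30])        # events: post-immunization, control
    exposure = np.array([1500, 3000])   # infant-days at risk in each window
    post     = np.array([1, 0])         # indicator for post-immunization period

    X = sm.add_constant(post)
    model = sm.GLM(counts, X, family=sm.families.Poisson(),
                   offset=np.log(exposure)).fit()

    # exp(coefficient) on the indicator is the incidence rate ratio;
    # an IRR of about 0.6 corresponds to the "40% lower" rate above.
    irr = np.exp(model.params[1])
    print(f"IRR (post vs. control): {irr:.2f}")
    ```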

  1. A framework for evaluating community-based physical activity promotion programmes in Latin America.

    PubMed

    Schmid, Thomas L; Librett, John; Neiman, Andrea; Pratt, Michael; Salmon, Art

    2006-01-01

    A growing interest in promoting physical activity through multi-sectoral community-based programmes has highlighted the need for effective programme evaluation. Meeting in Rio de Janeiro, an international workgroup of behavioural, medical, public health and other scientists and practitioners endorsed the principle of careful evaluation of all programmes and, in a consensus process, developed the "Rio de Janeiro Recommendations for Evaluation of Physical Activity Interventions". Among these recommendations and principles were that, when possible, evaluation should be 'built into' the programme from the beginning. The workgroup also called for adequate funding for evaluation, setting a goal of about 10% of programme resources for evaluation. The group also determined that evaluations should be developed in conjunction with, and the results shared with, all appropriate stakeholders in the programme; evaluations should be guided by ethical standards such as those proposed by the American Evaluation Association and should assess programme processes as well as outcomes; and evaluation outcomes should be used to revise and refine ongoing programmes and guide decisions about programme continuation or expansion. It was also recognised that additional training in programme evaluation is needed and that the Centers for Disease Control and Prevention's Physical Activity Evaluation Handbook could be easily adapted for use in culturally diverse communities, especially in Latin America. This paper describes a 6-step evaluation process and provides the full set of recommendations from the Rio de Janeiro Workgroup. The handbook has been translated, and additional case studies from Colombia and Brazil have been added. Spanish and Portuguese language editions of the Evaluation Handbook are available from the Centers for Disease Control and Prevention, Physical Activity and Health Branch.

  2. Evaluating a Federated Medical Search Engine

    PubMed Central

    Belden, J.; Williams, J.; Richardson, B.; Schuster, K.

    2014-01-01

    Summary Background Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants’ expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement. PMID:25298813

  3. A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso.

    PubMed

    D'Ostie-Racine, Léna; Dagenais, Christian; Ridde, Valéry

    2016-05-26

    Program evaluation is widely recognized in the international humanitarian sector as a means to make interventions and policies more evidence based, equitable, and accountable. Yet, little is known about the way humanitarian non-governmental organizations (NGOs) actually use evaluations. The current qualitative evaluation employed an instrumental case study design to examine evaluation use (EU) by a humanitarian NGO based in Burkina Faso. This organization developed an evaluation strategy in 2008 to document the implementation and effects of its maternal and child healthcare user fee exemption program. Program evaluations have been undertaken ever since, and the present study examined the discourses of evaluation partners in 2009 (n = 15) and 2011 (n = 17). Semi-structured individual interviews and one group interview were conducted to identify instances of EU over time. Alkin and Taut's (Stud Educ Eval 29:1-12, 2003) conceptualization of EU was used as the basis for thematic qualitative analyses of the different forms of EU identified by stakeholders of the exemption program in the two data collection periods. Results demonstrated that stakeholders began to understand and value the utility of program evaluations once they were exposed to evaluation findings and then progressively used evaluations over time. EU was manifested in a variety of ways, including instrumental and conceptual use of evaluation processes and findings, as well as the persuasive use of findings. Such EU supported planning, decision-making, program practices, evaluation capacity, and advocacy. The study sheds light on the many ways evaluations can be used by different actors in the humanitarian sector. Conceptualizations of EU are also critically discussed.

  4. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.
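
    The paper's rule base itself is not reproduced here, but a minimal sketch of the rule-based idea might look like the following: performance measures collected while the user interacts with the prototype are checked against declarative rules that flag possible interface inadequacies. The metric names and thresholds are hypothetical.

    ```python
    # Hypothetical user-performance metrics collected during interaction
    # with a prototype interface.
    metrics = {"task_time_s": 95.0, "error_count": 4, "help_requests": 2}

    # Declarative rules: (name, predicate, message). Thresholds are
    # illustrative assumptions, not values from the paper.
    rules = [
        ("slow_task", lambda m: m["task_time_s"] > 60.0,
         "Task took too long: review the display layout"),
        ("many_errors", lambda m: m["error_count"] >= 3,
         "High error count: controls may be ambiguous"),
        ("help_heavy", lambda m: m["help_requests"] >= 2,
         "Frequent help use: labeling may be unclear"),
    ]

    def evaluate(m):
        """Return the message of every rule whose condition fires."""
        return [msg for name, cond, msg in rules if cond(m)]

    for finding in evaluate(metrics):
        print("FLAG:", finding)
    ```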

  6. Evaluating the Healthy Start program. Design development to evaluative assessment.

    PubMed

    Raykovich, K S; McCormick, M C; Howell, E M; Devaney, B L

    1996-09-01

    The national evaluation of the federally funded Healthy Start program involved translating a design for a process and outcomes evaluation and standard maternal and infant data set, both developed prior to the national evaluation contract award, into an evaluation design and client data collection protocol that could be used to evaluate 15 diverse grantees. This article discusses the experience of creating a process and outcomes evaluation design that was both substantively and methodologically appropriate given such issues as the diversity of grantees and their community-based intervention strategies; the process of accessing secondary data sources, including vital records; the quality of client level data submissions; and the need to incorporate both qualitative and quantitative approaches into the evaluation design. The relevance of this experience for the conduct of other field studies of public health interventions is discussed.

  7. Foster youth evaluate the performance of group home services in California.

    PubMed

    Green, Rex S; Ellis, Peter T

    2008-05-01

    In 2003 foster youth employed by a foster youth advocacy organization suggested that an evaluation of group home services to foster youth be conducted in Alameda County, California. This report presents the development and conduct of this evaluation study; how funding was obtained; and how foster youth were hired, trained, and employed to produce a timely and informative evaluation of the performance of 32 group homes where some of the foster youth formerly resided. The results of the study are described in another paper. This report contributes to evaluation practice in the newly emerging field of youth-led evaluations. The achievements of this project in utilizing group home clients to evaluate services with which they were familiar may stimulate other evaluators to develop similar projects, thereby enriching the development of our youth and promoting more informative evaluation findings.

  8. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
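
    For readers unfamiliar with pixel-based binarization scoring, the following sketch computes the plain (unweighted) recall, precision, and F-measure against a ground-truth image. The paper's bias-diminishing weighting scheme is not reproduced here, and the toy images are invented.

    ```python
    import numpy as np

    def binarization_scores(gt, result):
        """Pixel-based recall, precision, and F-measure for a binarized
        image against ground truth. Both arrays are boolean, with True
        marking a text (foreground) pixel."""
        tp = np.logical_and(gt, result).sum()    # text pixels kept
        fp = np.logical_and(~gt, result).sum()   # background marked as text
        fn = np.logical_and(gt, ~result).sum()   # text pixels missed
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        f = (2 * recall * precision / (recall + precision)
             if recall + precision else 0.0)
        return recall, precision, f

    # Toy 4x4 example: ground truth vs. an imperfect binarization result.
    gt = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,1,1]], bool)
    res = np.array([[1,0,0,0],[1,1,0,1],[0,0,0,0],[0,0,1,1]], bool)
    print([round(x, 3) for x in binarization_scores(gt, res)])
    ```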

  9. 48 CFR 436.201 - Evaluation of contractor performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 436.201 Evaluation of contractor performance. Preparation of performance evaluation reports. In addition to the requirements of FAR 36.201, performance evaluation reports shall be prepared for indefinite... of services to be ordered exceeds $500,000.00. For these contracts, performance evaluation reports...

  10. 76 FR 51869 - Privacy Act Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... & Evaluative Files Database,'' ``FHFA-OIG Investigative & Evaluative MIS Database,'' ``FHFA-OIG Hotline... or evaluative records to an individual who is the subject of an investigation or evaluation could... investigative or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  11. Evaluating Tenured Teachers: A Practical Approach.

    ERIC Educational Resources Information Center

    DePasquale, Daniel, Jr.

    1990-01-01

    Teachers with higher order needs benefit from expressing their creativity and exercising valued skills. The evaluation process should encourage experienced teachers to grow professionally and move toward self-actualization. The suggested evaluation model includes an evaluation conference, a choice of evaluation method, a planning conference, an…

  12. Empowerment Evaluation: A Form of Self-Evaluation.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    Empowerment evaluation is an innovative approach that uses evaluation concepts and techniques to foster improvement and self-determination. Empowerment evaluation employs qualitative and quantitative methodologies. Although it can be applied to individuals and organizations, the usual focus is on programs. The value orientation of empowerment…

  13. Evaluating Teachers of Writing.

    ERIC Educational Resources Information Center

    Hult, Christine A., Ed.

    Describing the various forms evaluation can take, this book delineates problems in evaluating writing faculty and sets the stage for reconsidering the entire process to produce a fair, equitable, and appropriate system. The book discusses evaluation through real-life examples: evaluation of writing faculty by literature faculty, student…

  14. Evaluator Training Needs and Competencies: A Gap Analysis

    ERIC Educational Resources Information Center

    Galport, Nicole; Azzam, Tarek

    2017-01-01

    The systematic identification of evaluator competency training needs is crucial for the development of evaluators and the establishment of evaluation as a profession. Insight into essential competencies could help align training programs with field-specific needs, therefore clarifying expectations between evaluators, educators, and employers. This…

  15. PM Evaluation Guidelines.

    ERIC Educational Resources Information Center

    Bauch, Jerold P.

    This paper presents guidelines for the evaluation of candidate performance, the basic function of the evaluation component of the Georgia program model for the preparation of elementary school teachers. The three steps in the evaluation procedure are outlined: (1) proficiency module (PM) entry appraisal (pretest); (2) self evaluation and the…

  16. Evaluation of School Library Media Centers: Demonstrating Quality.

    ERIC Educational Resources Information Center

    Everhart, Nancy

    2003-01-01

    Discusses ways to evaluate school library media programs and how to demonstrate quality. Topics include how principals evaluate programs; sources of evaluative data; national, state, and local instruments; surveys and interviews; Colorado benchmarks; evaluating the use of electronic resources; and computer reporting options. (LRW)

  17. Formative and Summative Evaluation: Related Issues in Performance Measurement.

    ERIC Educational Resources Information Center

    Wholey, Joseph S.

    1996-01-01

    Performance measurement can serve both formative and summative evaluation functions. Formative evaluation is typically more useful for government purposes whereas performance measurement is more useful than one-shot evaluations of either formative or summative nature. Evaluators should study performance measurement through case studies and…

  18. On the Evaluation of Curriculum Reforms

    ERIC Educational Resources Information Center

    Hopmann, Stefan Thomas

    2003-01-01

    The paper considers the current international trend towards standards-based evaluation in a historical and comparative perspective. Based on a systematization of evaluation perspectives and tools, two basic patterns of curriculum control are discussed: process evaluation, and product evaluation. Whereas the first type has dominated the Continental…

  19. 48 CFR 3015.606-2 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation. 3015.606-2... Unsolicited Proposals 3015.606-2 Evaluation. (a) Comprehensive evaluations should be completed within sixty... contact point shall advise the offeror accordingly and provide a new evaluation completion date. The...

  20. 41 CFR 60-741.60 - Compliance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Compliance evaluations... evaluations. (a) OFCCP may conduct compliance evaluations to determine if the contractor maintains... with this part during employment. A compliance evaluation may consist of any one or any combination of...

  1. 48 CFR 852.273-72 - Alternative evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Alternative evaluation... Alternative evaluation. As prescribed in 873.110(c), insert the following provision: Alternative Evaluation... unbalanced. Evaluation of options shall not obligate the Government to exercise the option(s). (End of...

  2. Teacher Evaluation Reform Implementation and Labor Relations

    ERIC Educational Resources Information Center

    Pogodzinski, Ben; Umpstead, Regina; Witt, Jenifer

    2015-01-01

    The Michigan legislature recently enacted a teacher evaluation law which requires school districts to incorporate student achievement data into evaluation systems and mandated that evaluations be used to make high-stakes personnel decisions. Though administrators have considerable discretion to design and implement their evaluation systems, the…

  3. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Management evaluation. 712.15 Section 712.15 Energy... Program Procedures § 712.15 Management evaluation. (a) Evaluation components. An evaluation by the HRP management official is required before an individual can be considered for initial certification or...

  4. The Future of Principal Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew; Ross, Steven

    2012-01-01

    The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience indicate that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…

  5. 34 CFR 300.304 - Evaluation procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Placements Evaluations and Reevaluations § 300.304 Evaluation procedures. (a) Notice. The public agency must... evaluation, the public agency must— (1) Use a variety of assessment tools and strategies to gather relevant... procedures. Each public agency must ensure that— (1) Assessments and other evaluation materials used to...

  6. 12 CFR 614.4245 - Collateral evaluation policies.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Collateral evaluation policies. 614.4245... OPERATIONS Collateral Evaluation Requirements § 614.4245 Collateral evaluation policies. (a) The board of... shall adopt well-defined and effective collateral evaluation policies and standards, that comply with...

  7. 48 CFR 315.307 - Proposal revisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... appropriate; and (2) An evaluation of technical factors by the technical evaluation panel, as necessary. The technical evaluation panel may rescore and re-rank technical proposals in the competitive range and prepare a technical evaluation report. To the extent practicable, the same evaluators who reviewed the...

  8. Participatory methods for Inuit public health promotion and programme evaluation in Nunatsiavut, Canada.

    PubMed

    Saini, Manpreet

    2017-01-01

    Engaging stakeholders is crucial for health promotion and programme evaluations; understanding how best to engage stakeholders is less clear, especially within Indigenous communities. The objectives of this thesis research were to use participatory methods to: (1) co-develop and evaluate a whiteboard video for use as a public health promotion tool in Rigolet, Nunatsiavut, and (2) develop and validate a framework for participatory evaluation of Inuit public health initiatives in Nunatsiavut, Labrador. Data collection tools included interactive workshops, community events, interviews, focus-group discussions and surveys. Results indicated the whiteboard video was an engaging and suitable medium for sharing public health messaging due to its contextually relevant elements. Participants identified four foundational evaluation framework components necessary to conduct appropriate evaluations: (1) community engagement, (2) collaborative evaluation development, (3) tailored evaluation data collection and (4) evaluation scope. This research illustrates that stakeholder participation is critical to developing and evaluating contextually relevant public health initiatives in Nunatsiavut, Labrador, and should be considered in other Indigenous communities.

  9. Measures of Success for Earth System Science Education: The DLESE Evaluation Services and the Evaluation Toolkit Collection

    NASA Astrophysics Data System (ADS)

    McCaffrey, M. S.; Buhr, S. M.; Lynds, S.

    2005-12-01

    Increased agency emphasis upon the integration of research and education, coupled with the ability to provide students with access to digital background materials, learning activities and primary data sources, has begun to revolutionize Earth science education in formal and informal settings. The DLESE Evaluation Services team and the related Evaluation Toolkit collection (http://www.dlese.org/cms/evalservices/) provide services and tools for education project leads and educators. Through the Evaluation Toolkit, educators may access high-quality digital materials to assess students' cognitive gains, examples of alternative assessments, and case studies and exemplars of authentic research. The DLESE Evaluation Services team provides support for those who are developing evaluation plans on an as-requested basis. In addition, the Toolkit provides authoritative peer-reviewed articles about evaluation research techniques and strategies of particular importance to geoscience education. This paper will provide an overview of the DLESE Evaluation Toolkit and discuss challenges and best practices for assessing student learning and evaluating Earth system sciences education in a digital world.

  10. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design an algorithm whose evaluation values are consistent with subjective human judgments. This paper introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate human visual system (HVS) characteristics into the evaluation, so their results are not ideal. SSIM correlates well with subjective judgments and is simple to compute because it brings human visual characteristics into the evaluation; however, the SSIM method rests on a simplifying hypothesis, which limits its results. The FSIM method can be applied to both grayscale and color images and gives better results. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
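
    Since the paper tests these metrics in Matlab without listing code, here is a minimal Python sketch of MSE, PSNR, and a global (single-window) simplification of SSIM. Practical SSIM is computed over local windows and averaged, so this version only illustrates the formula; the test images are synthetic.

    ```python
    import numpy as np

    def mse(ref, test):
        """Mean squared error between reference and test images."""
        return np.mean((ref.astype(float) - test.astype(float)) ** 2)

    def psnr(ref, test, max_val=255.0):
        """Peak signal-to-noise ratio in dB; higher means closer to ref."""
        m = mse(ref, test)
        return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

    def ssim_global(ref, test, max_val=255.0):
        """Global (single-window) SSIM; a simplification of the windowed
        metric that shows the formula's structure."""
        x, y = ref.astype(float), test.astype(float)
        c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
            (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    np.random.seed(0)
    ref = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # gradient image
    noisy = np.clip(ref + np.random.normal(0, 8, ref.shape), 0, 255)
    print(f"MSE={mse(ref, noisy):.1f}  PSNR={psnr(ref, noisy):.1f} dB  "
          f"SSIM={ssim_global(ref, noisy):.3f}")
    ```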

  11. Issues in evaluation: evaluating assessments of elderly people using a combination of methods.

    PubMed

    McEwan, R T

    1989-02-01

    In evaluating a health service, individuals will give differing accounts of its performance, according to their experiences of the service, and the evaluative perspective they adopt. The value of a service may also change through time, and according to the particular part of the service studied. Traditional health care evaluations have generally not accounted for this variability because of the approaches used. Studies evaluating screening or assessment programmes for the elderly have focused on programme effectiveness and efficiency, using relatively inflexible quantitative methods. Evaluative approaches must reflect the complexity of health service provision, and methods must vary to suit the particular research objective. Under these circumstances, this paper presents the case for the use of multiple triangulation in evaluative research, where differing methods and perspectives are combined in one study. Emphasis is placed on the applications and benefits of subjectivist approaches in evaluation. An example of combined methods is provided in the form of an evaluation of the Newcastle Care Plan for the Elderly.

  12. Engineering flight and guest pilot evaluation report, phase 2. [DC 8 aircraft

    NASA Technical Reports Server (NTRS)

    Morrison, J. A.; Anderson, E. B.; Brown, G. W.; Schwind, G. K.

    1974-01-01

    Prior to the flight evaluation, the two-segment profile capabilities of the DC-8-61 were evaluated and flight procedures were developed in a flight simulator at the UA Flight Training Center in Denver, Colorado. The flight evaluation reported was conducted to determine the validity of the simulation results, further develop the procedures and use of the area navigation system in the terminal area, certify the system for line operation, and obtain evaluations of the system and procedures by a number of pilots from the industry. The full area navigation capabilities of the special equipment installed were developed to provide terminal area guidance for two-segment approaches. The objectives of this evaluation were: (1) perform an engineering flight evaluation sufficient to certify the two-segment system for the six-month in-service evaluation; (2) evaluate the suitability of a modified RNAV system for flying two-segment approaches; and (3) provide evaluation of the two-segment approach by management and line pilots.

  13. Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.

    PubMed

    Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P

    2018-03-03

    Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Status and the evaluation of workplace deviance.

    PubMed

    Bowles, Hannah Riley; Gelfand, Michele

    2010-01-01

    Bias in the evaluation of workplace misbehavior is hotly debated in courts and corporations, but it has received little empirical attention. Classic sociological literature suggests that deviance by lower-status actors will be evaluated more harshly than deviance by higher-status actors. However, more recent psychological literature suggests that discrimination in the evaluation of misbehavior may be moderated by the relative status of the evaluator because status influences both rule observance and attitudes toward social hierarchy. In Study 1, the psychological experience of higher status decreased rule observance and increased preferences for social hierarchy, as we theorized. In three subsequent experiments, we tested the hypothesis that higher-status evaluators would be more discriminating in their evaluations of workplace misbehavior, evaluating fellow higher-status deviants more leniently than lower-status deviants. Results supported the hypothesized interactive effect of evaluator status and target status on the evaluation of workplace deviance, when both achieved status characteristics (Studies 2a and 2b) and ascribed status characteristics (i.e., race and gender in Study 3) were manipulated.

  15. Comparative analysis of perceptual evaluation, acoustic analysis and indirect laryngoscopy for vocal assessment of a population with vocal complaint.

    PubMed

    Nemr, Kátia; Amar, Ali; Abrahão, Marcio; Leite, Grazielle Capatto de Almeida; Köhle, Juliana; Santos, Alexandra de O; Correa, Luiz Artur Costa

    2005-01-01

    As a result of technological evolution and development, methods of voice evaluation have changed in both medical and speech-language pathology practice. Aim: to relate the results of perceptual evaluation, acoustic analysis and medical evaluation in the diagnosis of vocal and/or laryngeal affections in a population with vocal complaints. Design: clinical prospective study. Twenty-nine people who attended a vocal health protection campaign were evaluated. They underwent perceptual evaluation (AFPA), acoustic analysis (AA), indirect laryngoscopy (LI) and telelaryngoscopy (TL). Correlations between the medical and speech-language pathology evaluation methods were established, and statistical significance was verified with Fisher's exact test. Statistically significant correlations were found between AFPA and LI, AFPA and TL, and LI and TL. This study, conducted during a vocal health protection campaign, thus found correlations between the speech-language pathology evaluations and the medical examinations of vocal and/or laryngeal affections.

  16. Can a workbook work? Examining whether a practitioner evaluation toolkit can promote instrumental use.

    PubMed

    Campbell, Rebecca; Townsend, Stephanie M; Shaw, Jessica; Karim, Nidal; Markowitz, Jenifer

    2015-10-01

    In large-scale, multi-site contexts, developing and disseminating practitioner-oriented evaluation toolkits are an increasingly common strategy for building evaluation capacity. Toolkits explain the evaluation process, present evaluation design choices, and offer step-by-step guidance to practitioners. To date, there has been limited research on whether such resources truly foster the successful design, implementation, and use of evaluation findings. In this paper, we describe a multi-site project in which we developed a practitioner evaluation toolkit and then studied the extent to which the toolkit and accompanying technical assistance was effective in promoting successful completion of local-level evaluations and fostering instrumental use of the findings (i.e., whether programs directly used their findings to improve practice, see Patton, 2008). Forensic nurse practitioners from six geographically dispersed service programs completed methodologically rigorous evaluations; furthermore, all six programs used the findings to create programmatic and community-level changes to improve local practice. Implications for evaluation capacity building are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. [Self-acceptance as adaptively resigning the self to low self-evaluation].

    PubMed

    Ueda, T

    1996-10-01

    In past studies, the concept of self-acceptance has often been confused with self-evaluation or self-esteem. The purpose of this study was to distinguish these concepts, and operationally define self-acceptance as Carl Rogers proposed: feeling all right toward the self when self-evaluation was low. Self-acceptance as adaptive resignation, a moderating variable, therefore should raise self-esteem of only those people with low self-evaluation. Self-acceptance was measured in the study as affirmative evaluation of own self-evaluation. Two hundred and forty college students, 120 each for men and women, completed a questionnaire of self-evaluative consciousness and self-esteem scales. Results of statistical analyses showed that among subjects with low self-evaluation, the higher self-acceptance, the higher the person's self-esteem. The same relation was not observed among those with high self-evaluation. Thus, it may be concluded that self-acceptance was adaptive resignation, and therefore meaningful to only those with low self-evaluation.

  18. Revisiting a theory of negotiation: the utility of Markiewicz (2005) proposed six principles.

    PubMed

    McDonald, Diane

    2008-08-01

    People invited to participate in an evaluation process will inevitably come from a variety of personal backgrounds and hold different views based on their own lived experience. However, evaluators are in a privileged position because they have access to information from a wide range of sources and can play an important role in helping stakeholders to hear and appreciate one another's opinions and ideas. Indeed, in some cases a difference in perspective can be utilised by an evaluator to engage key stakeholders in fruitful discussion that can add value to the evaluation outcome. In other instances the evaluator finds that the task of facilitating positive interaction between multiple stakeholders is just 'an uphill battle' and so conflict, rather than consensus, occurs as the evaluation findings emerge and are debated. As noted by Owen [(2006) PROGRAM EVALUATION: Forms and approaches (3rd ed.). St. Leonards, NSW: Allen & Unwin] and other eminent evaluators before him [Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3-46). Thousand Oaks, CA: Sage Publications; Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage Publications; Stake, R. A. (1983). Stakeholder influence in the evaluation of cities-in-schools. New Directions for Program Evaluation, 17, 15-30], conflict in an evaluation process is not unexpected. The challenge is for evaluators to facilitate dialogue between people who hold strongly opposing views, with the aim of helping them to achieve a common understanding of the best way forward. However, this does not imply that consensus will be reached [Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage]. What is essential is that the evaluator assists the various stakeholders to recognise and accept their differences and be willing to move on. But the problem is that evaluators are not necessarily equipped with the technical or personal skills required for effective negotiation. In addition, the time and effort that are required to undertake this mediating role are often not sufficiently understood by those who commission a review. With such issues in mind Markiewicz, A. [(2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, 4(1-2), 13-21] has proposed six principles upon which to build a case for negotiation to be integrated into the evaluation process. This paper critiques each of these principles in the context of an evaluation undertaken of a youth program. In doing so it challenges the view that stakeholder consensus is always possible if program improvement is to be achieved. This has led to some refinement and further extension of the proposed theory of negotiation that is seen to be instrumental to the role of an evaluator.

  19. A Study of Crowd Ability and its Influence on Crowdsourced Evaluation of Design Concepts

    DTIC Science & Technology

    2014-05-01

    identifies the experts from the crowd, under the assumptions that (1) experts do exist and (2) only experts have consistent evaluations. These assumptions...for design evaluation tasks. Keywords: crowdsourcing, design evaluation, sparse evaluation ability, machine learning ...intelligence" of a much larger crowd of people with diverse backgrounds [1]. Crowdsourced evaluation, or the delegation of an evaluation task to a

  20. A methodology for evaluating the usability of audiovisual consumer electronic products.

    PubMed

    Kwahk, Jiyoung; Han, Sung H

    2002-09-01

    Usability evaluation is now considered an essential procedure in consumer product development. Many studies have been conducted to develop various techniques and methods of usability evaluation in the hope of helping evaluators choose appropriate methods. However, planning and conducting a usability evaluation requires consideration of a number of factors surrounding the evaluation process, including the product, user, activity, and environmental characteristics. From this perspective, this study suggested a new methodology of usability evaluation through a simple, structured framework. The framework was outlined by three major components: the interface features of a product as design variables; the evaluation context, consisting of user, product, activity, and environment, as context variables; and the usability measures as dependent variables. Based on this framework, this study established methods to specify the product interface features, to define the evaluation context, and to measure usability. The effectiveness of this methodology was demonstrated through case studies in which the usability of audiovisual products was evaluated using the methods developed in this study. This study is expected to help usability practitioners in the consumer electronics industry in various ways. Most directly, it supports evaluators in planning and conducting usability evaluation sessions in a systematic and structured manner. In addition, it can be applied to other categories of consumer products (such as appliances, automobiles, communication devices, etc.) with minor modifications as necessary.

  1. Challenges of teacher-based clinical evaluation from nursing students' point of view: Qualitative content analysis.

    PubMed

    Sadeghi, Tabandeh; Seyed Bagheri, Seyed Hamid

    2017-01-01

    Clinical evaluation is very important in the educational system of nursing. One of the most common methods of clinical evaluation is evaluation by the teacher, but the challenges students face in this evaluation method have not been described. Thus, this study aimed to explore the experiences and views of nursing students regarding the challenges of teacher-based clinical evaluation. This was a descriptive qualitative study with a qualitative content analysis approach. Data were gathered through semi-structured focus-group sessions with undergraduate nursing students in their 8th semester at Rafsanjan University of Medical Sciences. Data were analyzed using Graneheim and Lundman's proposed method; data collection and analysis were concurrent. According to the findings, "factitious evaluation" was the main theme of the study, consisting of three categories: "personal preferences," "unfairness" and "shirking responsibility." These categories are explained using quotes derived from the data. According to the results of this study, teacher-based clinical evaluation can lead to factitious evaluation. Changing this approach toward modern methods of evaluation is therefore suggested. The findings can help nursing instructors better understand nursing students' views of this evaluation approach and, as a result, plan for changing it.

  2. On the evaluation of social innovations and social enterprises: Recognizing and integrating two solitudes in the empirical knowledge base.

    PubMed

    Szijarto, Barbara; Milley, Peter; Svensson, Kate; Cousins, J Bradley

    2018-02-01

    Social innovation (SI) is billed as a new way to address complex social problems. Interest in SI has intensified rapidly in the last decade, making it an important area of practice for evaluators, but a difficult one to navigate. Learning from developments in SI and evaluation approaches applied in SI contexts is challenging because of 'fuzzy' concepts and silos of activity and knowledge within SI communities. This study presents findings from a systematic review and integration of 41 empirical studies on evaluation in SI contexts. We identify two isolated conversations: one about 'social enterprises' (SEs) and the other about non-SE 'social innovations'. These conversations diverge in key areas, including engagement with evaluation scholarship, and in the reported purposes, approaches and use of evaluation. We identified striking differences with respect to degree of interest in collaborative approaches and facilitation of evaluation use. The findings speak to trends and debates in our field, for example how evaluation might reconcile divergent information needs in multilevel, cross-sectoral collaborations and respond to fluidity and change in innovative settings. Implications for practitioners and commissioners of evaluation include how evaluation is used in different contexts and the voice of evaluators (and the evaluation profession) in these conversations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Using QALYs in telehealth evaluations: a systematic review of methodology and transparency.

    PubMed

    Bergmo, Trine S

    2014-08-03

    The quality-adjusted life-year (QALY) is a recognised outcome measure in health economic evaluations. QALYs incorporate individual preferences and identify health gains by combining mortality and morbidity into a single index number. A literature review was conducted to examine and discuss the use of QALYs to measure outcomes in telehealth evaluations. Evaluations were identified via a literature search in all relevant databases. Only economic evaluations measuring both costs and QALYs, using primary patient-level data and comparing two or more alternatives, were included. A total of 17 economic evaluations estimating QALYs were identified. All evaluations used validated generic health-related quality of life (HRQoL) instruments to describe health states, and all used accepted methods for transforming the quality scores into utility values. The methodology varied between the evaluations, which drew on four different preference measures (EQ-5D, SF-6D, QWB and HUI3), with utility scores elicited from the general population. Most studies reported the methodology used in calculating QALYs; the evaluations were less transparent in reporting utility weights at different time points and the variability around utilities and QALYs. Few made adjustments for differences in baseline utilities. The QALYs gained in the reviewed evaluations varied from 0.001 to 0.118, implying a small but positive effect of the telehealth interventions on patients' health. The evaluations reported mixed cost-effectiveness results. The use of QALYs in telehealth evaluations has increased over the last few years. Different methodologies and utility measures have been used to calculate QALYs; a more harmonised methodology and utility measure are needed to ensure comparability across telehealth evaluations.
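
    The QALY computation underlying these figures is simply the area under the utility curve over time, with incremental QALYs being the difference in that area between arms. A minimal sketch follows, with invented utility trajectories rather than data from any reviewed evaluation.

    ```python
    # Hypothetical utility trajectories (EQ-5D-style weights measured at
    # follow-up points) for a telehealth arm and a usual-care arm.
    # Times are in years; utilities on the 0 (dead) to 1 (full health) scale.
    times      = [0.0, 0.5, 1.0]        # assessment points (years)
    telehealth = [0.70, 0.78, 0.80]
    usual_care = [0.70, 0.72, 0.74]

    def qalys(times, utilities):
        """Area under the utility curve via the trapezoidal rule."""
        return sum((t1 - t0) * (u0 + u1) / 2
                   for t0, t1, u0, u1 in
                   zip(times, times[1:], utilities, utilities[1:]))

    gain = qalys(times, telehealth) - qalys(times, usual_care)
    print(f"Incremental QALYs: {gain:.3f}")  # 0.045 here, within the 0.001-0.118 range above
    ```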

  4. Can Principals Promote Teacher Development as Evaluators? A Case Study of Principals’ Views and Experiences

    PubMed Central

    Kraft, Matthew A.; Gilmour, Allison

    2017-01-01

    Purpose New teacher evaluation systems have expanded the role of principals as instructional leaders, but little is known about principals’ ability to promote teacher development through the evaluation process. We conducted a case study of principals’ perspectives on evaluation and their experiences implementing observation and feedback cycles to better understand whether principals feel as though they are able to promote teacher development as evaluators. Research Methods We conducted interviews with a stratified random sample of 24 principals in an urban district that recently implemented major reforms to its teacher evaluation system. We analyzed these interviews by drafting thematic summaries, coding interview transcripts, creating data-analytic matrices, and writing analytic memos. Findings We found that the evaluation reforms provided a common framework and language that helped facilitate principals’ feedback conversations with teachers. However, we also found that tasking principals with primary responsibility for conducting evaluations resulted in a variety of unintended consequences which undercut the quality of evaluation feedback they provided. We analyze five broad solutions to these challenges: strategically targeting evaluations, reducing operational responsibilities, providing principal training, hiring instructional coaches, and developing peer evaluation systems. Implications The quality of feedback teachers receive through the evaluation process depends critically on the time and training evaluators have to provide individualized and actionable feedback. Districts that task principals with primary responsibility for conducting observation and feedback cycles must attend to the many implementation challenges associated with this approach in order for next-generation evaluation systems to successfully promote teacher development. PMID:28729742

  5. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study.

    PubMed

    Odendaal, Willem; Atkins, Salla; Lewin, Simon

    2016-12-15

    Formative programme evaluations assess intervention implementation processes, and are seen widely as a way of unlocking the 'black box' of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and there are especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed methods within formative evaluations of complex health system interventions. The evaluation's qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers' scope of practice and a client survey. The authors conceptualised and conducted the evaluation, and through iterative discussions, assessed the methods used and their results. Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single-method evaluations. The strengths of the multiple, mixed methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented, as this approach can overstretch the logistic and analytic resources of an evaluation. For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single-method evaluations. However, their value is not in the number of methods used, but in how each method matches the evaluation questions and the scientific integrity with which the methods are selected and implemented.

  6. Standardizing the evaluation criteria on treatment outcomes of mandibular implant overdentures: a systematic review

    PubMed Central

    Kim, Ha-Young; Shin, Sang-Wan

    2014-01-01

    PURPOSE The aim of this review was to analyze the evaluation criteria on mandibular implant overdentures through a systematic review and to suggest standardized evaluation criteria. MATERIALS AND METHODS A systematic literature search was conducted using a PubMed search strategy and hand-searching of relevant journals from included studies, considering inclusion and exclusion criteria. Randomized clinical trials (RCT) and clinical trial studies comparing attachment systems on mandibular implant overdentures through December 2011 were selected. Twenty-nine studies were finally selected, and data about evaluation methods were collected. RESULTS Evaluation criteria could be classified into 4 groups (implant survival, peri-implant tissue evaluation, prosthetic evaluation, and patient satisfaction). Among the 29 studies, 21 presented implant survival rates, while none of the studies reporting implant failure presented cumulative implant survival rates. Seventeen studies evaluating peri-implant tissue status presented the following items as evaluation criteria: marginal bone level (14), plaque index (13), probing depth (8), bleeding index (8), attached gingiva level (8), gingival index (6), and amount of keratinized gingiva (1). Eighteen studies evaluating prosthetic maintenance and complications presented the following items as evaluation criteria: loose matrix (17), female detachment (15), denture fracture (15), denture relining (14), abutment fracture (14), abutment screw loosening (11), and occlusal adjustment (9). Atypical questionnaires (9), visual analog scales (VAS) (4), and the Oral Health Impact Profile (OHIP) (1) were used as formats for evaluating patient satisfaction in 14 studies. CONCLUSION For evaluation of implant overdentures, it is necessary to include the cumulative survival rate in implant evaluation. It is suggested that peri-implant tissue evaluation criteria include marginal bone level, plaque index, bleeding index, probing depth, and attached gingiva level. It is also suggested that prosthetic evaluation criteria include loose matrix, female detachment, denture fracture, denture relining, abutment fracture, abutment screw loosening, and occlusal adjustment. Finally, standardized criteria such as OHIP-EDENT or VAS are required for evaluating patient satisfaction. PMID:25352954

  7. Evaluating Educational Programmes: The Need and the Response.

    ERIC Educational Resources Information Center

    Stake, Robert E.

    This survey of recent developments in educational program evaluation is intended for persons who commission, implement, direct, or carry out evaluation studies. The attitudes of government officials, educators, and researchers toward assessment and their own evaluation needs are discussed. Various approaches to evaluation are briefly described;…

  8. 48 CFR 3052.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... CONTRACT CLAUSES Text of Provisions and Clauses 3052.216-72 Performance evaluation plan. As prescribed in... Evaluation Plan (DEC 2003) (a) A Performance Evaluation Plan shall be unilaterally established by the...

  9. 48 CFR 8.606 - Evaluating FPI performance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... 8.606 Evaluating FPI performance. Agencies shall evaluate FPI contract performance in accordance with subpart 42.15. Performance evaluations do not negate the requirements of 8.602 and 8.604, but they... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluating FPI performance...

  10. 48 CFR 1252.216-72 - Performance evaluation plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Performance evaluation....216-72 Performance evaluation plan. As prescribed in (TAR) 48 CFR 1216.406(b), insert the following clause: Performance Evaluation Plan (OCT 1994) (a) A Performance Evaluation Plan shall be unilaterally...

  11. How Can Multi-Site Evaluations Be Participatory?

    ERIC Educational Resources Information Center

    Lawrenz, Frances; Huffman, Douglas

    2003-01-01

    Multi-site evaluations are becoming increasingly common in federal funding portfolios. Although much thought has been given to multi-site evaluation, there has been little emphasis on how it might interact with participatory evaluation. Therefore, this paper reviews several National Science Foundation educational, multi-site evaluations for the…

  12. Forestry research evaluation: current progress, future directions.

    Treesearch

    Christopher D. Risbrudt; Pamela J. Jakes

    1985-01-01

    Research evaluation is a relatively recent endeavor in forestry economics. This workshop represents most of the current and recently completed studies available in this subfield of forestry and evaluation. Also included are discussions among scientists and policymakers concerning the uses of forestry research evaluations, evaluation problems encountered, solutions...

  13. Broadening the Discussion about Evaluator Advocacy

    ERIC Educational Resources Information Center

    Hendricks, Michael

    2014-01-01

    This issue of "American Journal of Evaluation" presents commentaries by evaluation leaders, George Grob and Rakesh Mohan, which draw upon their wealth of practical experience to address questions about evaluator advocacy, including What is meant by the word "advocacy"? Should evaluators ever advocate? If so, when and how? What…

  14. Urban Transportation Planning Short Course: Evaluation of Alternative Transportation Systems.

    ERIC Educational Resources Information Center

    Federal Highway Administration (DOT), Washington, DC.

    This urban transportation pamphlet delves into the roles of policy groups and technical staffs in evaluating alternative transportation plans, evaluation criteria, systems to evaluate, and evaluation procedures. The introduction acknowledges the importance of subjective, but informed, judgment as an effective tool in weighing alternative transportation…

  15. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  16. Reconceptualizing Evaluator Roles

    ERIC Educational Resources Information Center

    Skolits, Gary J.; Morrow, Jennifer Ann; Burr, Erin Mehalic

    2009-01-01

    The current evaluation literature tends to conceptualize evaluator roles as a single, overarching orientation toward an evaluation, an orientation largely driven by evaluation methods, models, or stakeholder orientations. Identified roles range from social transformer or neutral social scientist to educator or even power merchant.…

  17. Educational Evaluation: Analysis and Responsibility.

    ERIC Educational Resources Information Center

    Apple, Michael W., Ed.; And Others

    This book presents controversial aspects of evaluation and aims at broadening perspectives and insights in the evaluation field. Chapter 1 criticizes modes of evaluation and the basic rationality behind them and focuses on assumptions that have problematic consequences. Chapter 2 introduces concepts of evaluation and examines methods of grading…

  18. 22 CFR 1701.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Self-evaluation. 1701.110 Section 1701.110...-evaluation. (a) The agency shall, by November 28, 1994, evaluate its current policies and practices, and the... handicaps or organizations representing individuals with handicaps, to participate in the self-evaluation...

  19. 25 CFR 720.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Self-evaluation. 720.110 Section 720.110 Indians THE...-evaluation. (a) The agency shall, by August 24, 1987, evaluate its current policies and practices, and the... or organizations representing handicapped persons, to participate in the self-evaluation process by...

  20. 48 CFR 17.206 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation. 17.206 Section... CONTRACT TYPES SPECIAL CONTRACTING METHODS Options 17.206 Evaluation. (a) In awarding the basic contract... officer need not evaluate offers for any option quantities when it is determined that evaluation would not...

  1. 10 CFR 709.10 - Scope of a counterintelligence evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Scope of a counterintelligence evaluation. 709.10 Section 709.10 Energy DEPARTMENT OF ENERGY COUNTERINTELLIGENCE EVALUATION PROGRAM CI Evaluation Protocols and Protection of National Security § 709.10 Scope of a counterintelligence evaluation. A counterintelligence...

  2. 10 CFR 709.10 - Scope of a counterintelligence evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Scope of a counterintelligence evaluation. 709.10 Section 709.10 Energy DEPARTMENT OF ENERGY COUNTERINTELLIGENCE EVALUATION PROGRAM CI Evaluation Protocols and Protection of National Security § 709.10 Scope of a counterintelligence evaluation. A counterintelligence...

  3. Using an Evaluation Hotline to Promote Stakeholder Involvement

    ERIC Educational Resources Information Center

    Skolits, Gary J.; Boser, Judith A.

    2008-01-01

    This article addresses the design and application of a hotline to promote broader community-wide participation in a public school evaluation. Evaluations of community resources such as public schools present evaluators with challenges from the perspective of promoting stakeholder involvement. Although many evaluation stakeholders are readily…

  4. Teacher Evaluation.

    ERIC Educational Resources Information Center

    Saif, Philip

    This article examines why teachers should be evaluated, how teacher evaluation is perceived, and how teacher evaluation can be approached, focusing on the improvement of teacher competency rather than defining a teacher as "good" or "bad." Since the primary professional activity of a teacher is teaching, the major concern of teacher evaluation is…

  5. 75 FR 2556 - Extension of Agency Information Collection Activity Under OMB Review: Transportation Security...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... Information Collection Activity Under OMB Review: Transportation Security Officer (TSO) Medical Questionnaire... Evaluation, Cardiac Further Evaluation, Diabetes Further Evaluation, Drug or Alcohol Use Further Evaluation... evaluate a candidate's physical and medical qualifications to be a TSO, including visual and aural acuity...

  6. Adaptation, Evaluation and Inclusion

    ERIC Educational Resources Information Center

    Basson, R.

    2011-01-01

    In this article I reflect on a recent development currently shaping programme evaluation as a field, which makes the case for evaluators facilitating evaluation by training evaluees to self-evaluate and improve the programmes they teach. Fetterman argues persuasively that the practice was incipient in the field and required formalization and acceptance…

  7. Evaluate Yourself. Evaluation: Research-Based Decision Making Series, Number 9304.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    This document considers both self-examination and external evaluation of gifted and talented education programs. Principles of the self-examination process are offered, noting similarities to external evaluation models. Principles of self-evaluation efforts include the importance of maintaining a nonjudgmental orientation, soliciting views from…

  8. Einstein as Evaluator?

    ERIC Educational Resources Information Center

    Caulley, Darrel N.

    1982-01-01

    Like any other person, Albert Einstein was an informal evaluator, engaged in placing value on various aspects of his life, work, and the world. Based on Einstein's own statements, this paper speculates about what Einstein would have been like as a connoisseur evaluator, a conceptual evaluator, or a responsive evaluator. (Author/BW)

  9. Developing Evaluation Capacity through Process Use

    ERIC Educational Resources Information Center

    King, Jean A.

    2007-01-01

    This article discusses how to make process use an independent variable in evaluation practice: the purposeful means of building an organization's capacity to conduct and use evaluations in the long run. The goal of evaluation capacity building (ECB) is to strengthen and sustain effective program evaluation practices through a number of activities:…

  10. Connected Vehicle Pilot Deployment Program Independent Evaluation: Mobility, Environmental, and Public Agency Efficiency Refined Evaluation Plan - New York City

    DOT National Transportation Integrated Search

    2018-03-01

    The purpose of this report is to provide a refined evaluation plan detailing the approach to be used by the Texas A&M Transportation Institute Connected Vehicle Pilot Deployment Evaluation Team for evaluating the mobility, environmental, and public a...

  11. Practice Parameter for Child and Adolescent Forensic Evaluations

    ERIC Educational Resources Information Center

    Journal of the American Academy of Child & Adolescent Psychiatry, 2011

    2011-01-01

    This Parameter addresses the key concepts that differentiate the forensic evaluation of children and adolescents from a clinical assessment. There are ethical issues unique to the forensic evaluation, because the forensic evaluator's duty is to the person, court, or agency requesting the evaluation, rather than to the patient. The forensic…

  12. Student Evaluation of Teaching: Keeping in Touch with Reality

    ERIC Educational Resources Information Center

    Palmer, Stuart

    2012-01-01

    Student evaluation of teaching is commonplace in many universities and may be the predominant input into the performance evaluation of staff and organisational units. This article used publicly available student evaluation of teaching data to present examples of where institutional responses to evaluation processes appeared to be educationally…

  13. The Practice of Evaluation Research and the Use of Evaluation Results.

    ERIC Educational Resources Information Center

    Van den Berg, G.; Hoeben, W. Th. J. G.

    1984-01-01

    Lack of use of educational evaluation results in the Netherlands was investigated by analyzing 14 curriculum evaluation studies. Results indicated that rational decision making with a technical (empirical) evaluation approach makes utilization of results most likely. Incremental decision making and a conformative approach make utilization least…

  14. Evaluative Priming of Naming and Semantic Categorization Responses Revisited: A Mutual Facilitation Explanation

    ERIC Educational Resources Information Center

    Schmitz, Melanie; Wentura, Dirk

    2012-01-01

    The evaluative priming effect (i.e., faster target responses following evaluatively congruent compared with evaluatively incongruent primes) in nonevaluative priming tasks (such as naming or semantic categorization tasks) is considered important for the question of how evaluative connotations are represented in memory. However, the empirical…

  15. 42 CFR 483.128 - PASARR evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PASARR program must use at least the evaluative criteria of § 483.130 (if one or both determinations can... of all data concerning the individual. (g) Preexisting data. Evaluators may use relevant evaluative... determinations, findings must be issued in the form of a written evaluative report which— (1) Identifies the name...

  16. 42 CFR 483.128 - PASARR evaluation criteria.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... PASARR program must use at least the evaluative criteria of § 483.130 (if one or both determinations can... of all data concerning the individual. (g) Preexisting data. Evaluators may use relevant evaluative... determinations, findings must be issued in the form of a written evaluative report which— (1) Identifies the name...

  17. Principal Evaluation in Indiana: Practitioners' Perceptions of a New Statewide Model

    ERIC Educational Resources Information Center

    Andrews, Kelly A.; Boyland, Lori G.; Quick, Marilynn M.

    2016-01-01

    This study examines administrators' perspectives of a state-developed principal evaluation model adopted by a majority of Indiana school districts after legislation mandated policy reform in educator evaluation. Feedback was gathered from public school superintendents (the evaluators) and principals (those being evaluated), with 364 participants.…

  18. 42 CFR 483.128 - PASARR evaluation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PASARR program must use at least the evaluative criteria of § 483.130 (if one or both determinations can... of all data concerning the individual. (g) Preexisting data. Evaluators may use relevant evaluative... determinations, findings must be issued in the form of a written evaluative report which— (1) Identifies the name...

  19. 42 CFR 483.128 - PASARR evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PASARR program must use at least the evaluative criteria of § 483.130 (if one or both determinations can... of all data concerning the individual. (g) Preexisting data. Evaluators may use relevant evaluative... determinations, findings must be issued in the form of a written evaluative report which— (1) Identifies the name...

  20. 42 CFR 483.128 - PASARR evaluation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PASARR program must use at least the evaluative criteria of § 483.130 (if one or both determinations can... of all data concerning the individual. (g) Preexisting data. Evaluators may use relevant evaluative... determinations, findings must be issued in the form of a written evaluative report which— (1) Identifies the name...

  1. A Model for the Evaluation of Educational Products.

    ERIC Educational Resources Information Center

    Bertram, Charles L.

    A model for the evaluation of educational products based on experience with development of three such products is described. The purpose of the evaluation model is to indicate the flow of evaluation activity as products undergo development. Evaluation is given Stufflebeam's definition as the process of delineating, obtaining, and providing useful…

  2. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Quality evaluation data...

  3. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Quality evaluation data...

  4. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Quality evaluation data...

  5. 48 CFR 246.470-2 - Quality evaluation data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 246.470-2 Quality evaluation data. The contract administration office shall establish a system for the collection, evaluation, and use of the types of quality evaluation data specified in PGI 246.470-2. [71 FR... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Quality evaluation data...

  6. Using Curriculum-Based Measurements for Program Evaluation: Expanding Roles for School Psychologists

    ERIC Educational Resources Information Center

    Tusing, Mary E.; Breikjern, Nicholle A.

    2017-01-01

    Educators increasingly need to evaluate schoolwide reform efforts; however, complex program evaluations often are not feasible in schools. Through a case example, we provide a heuristic for program evaluation that is easily replicated in schools. Criterion-referenced interpretations of schoolwide screening data were used to evaluate outcomes…

  7. Developing and Implementing a Counselor Evaluation Program.

    ERIC Educational Resources Information Center

    Bell, Priscilla J.; Acker, Kathleen E.

    In the past several years, Tacoma Community College (TCC) has devoted increasing attention to evaluating faculty and staff performance. In recognition of the benefits of a growth-oriented evaluation process over a summative evaluation, the counselors and the Dean for Student Services at TCC developed a comprehensive evaluation system for…

  8. 48 CFR 313.106-2 - Evaluation of quotations or offers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Evaluation of quotations... Evaluation of quotations or offers. (b)(5) Technical Evaluation. When conducting a technical evaluation of quotations or proposals received under FAR Part 13, the provisions of 315.305(a)(3) apply. ...

  9. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project evaluation and rating. (a) The Secretary shall evaluate and rate each proposed project as “highly recommended... 23 Highways 1 2010-04-01 2010-04-01 false Project evaluation and rating. 505.11 Section 505.11...

  10. Complexities in the Evaluation of Distance Education and Virtual Schooling.

    ERIC Educational Resources Information Center

    Vrasidas, Charalambos; Zembylas, Michalinos; Chamberlain, Richard

    2003-01-01

    Discusses the issues related to evaluation of distance education and virtual schooling. The evaluation design of a virtual high school project is presented, and goals, stakeholder analysis, evaluator role, data collection, and data analysis are described. The need for evaluation of distance education and the ethical responsibility of the…

  11. Where Local and National Evaluators Meet: Unintended Threats to Ethical Evaluation Practice

    ERIC Educational Resources Information Center

    Rodi, Michael S.; Paget, Kathleen D.

    2007-01-01

    The ethical work of program evaluators is based on a covenant of honesty and transparency among stakeholders. Yet even under the most favorable evaluation conditions, threats to ethical standards exist and muddle that covenant. Unfortunately, ethical issues associated with different evaluation structures and contracting arrangements have received…

  12. Transition of genomic evaluation from a research project to a production system

    USDA-ARS?s Scientific Manuscript database

    Genomic data began to be included in official USDA genetic evaluations of dairy cattle in January 2009. Numerous changes to the evaluation system were made to enable efficient management of genomic information, to incorporate it in official evaluations, and to distribute evaluations. Artificial-inse...

  13. Using Evaluation and Research Theory to Improve Programs in Applied Settings: An Interview with Thomas D. Cook.

    ERIC Educational Resources Information Center

    Buescher, Thomas M.

    1986-01-01

    An interview with T. Cook, author of works on the use of research and evaluation theory and design, touches on such topics as practical evaluation, planning programs with evaluation or research design, and evaluation of programs for gifted students. (CL)

  14. The Evaluator's Perspective: Evaluating the State Capacity Building Program.

    ERIC Educational Resources Information Center

    Madey, Doren L.

    A historical antagonism between the advocates of quantitative evaluation methods and the proponents of qualitative evaluation methods has stymied the recognition of the value to be gained by utilizing both methodologies in the same study. The integration of quantitative and qualitative methods within a single evaluation has synergistic effects in…

  15. Culturally Responsive Evaluation Meets Systems-Oriented Evaluation

    ERIC Educational Resources Information Center

    Thomas, Veronica G.; Parsons, Beverly A.

    2017-01-01

    The authors of this article each bring a different theoretical background to their evaluation practice. The first author has a background of attention to culturally responsive evaluation (CRE), while the second author has a background of attention to systems theories and their application to evaluation. Both have had their own evolution of…

  16. The Use of Collaborative Midterm Student Evaluations to Provide Actionable Results

    ERIC Educational Resources Information Center

    Veeck, Ann; O'Reilly, Kelley; MacMillan, Amy; Yu, Hongyan

    2016-01-01

    Midterm student evaluations have been shown to be beneficial for providing formative feedback for course improvement. With the purpose of improving instruction in marketing courses, this research introduces and evaluates a novel form of midterm student evaluation of teaching: the online collaborative evaluation. Working in small teams, students…

  17. Symposium: Perspectives on Formative Evaluation of Children's Television Programs.

    ERIC Educational Resources Information Center

    1977

    Evaluators of television programming and representatives of funding agencies discussed the impact of the perceptions of funding agencies on the evaluation of children's television. Participants also examined the interplay between the objectives of the television series and the evaluation, the relationship between production and evaluation, and the…

  18. Educational Evaluation: Key Characteristics. ACER Research Series No. 102.

    ERIC Educational Resources Information Center

    Maling-Keepes, Jillian

    A set of 13 key characteristics is presented as a framework for educational evaluation studies: (1) program's stage of development when evaluator is appointed; (2) program's openness to revision; (3) program uniformity from site to site; (4) specificity of program objectives; (5) evaluator's independence; (6) evaluator's orientation to value…

  19. Will They Like Me? Adolescents' Emotional Responses to Peer Evaluation

    ERIC Educational Resources Information Center

    Guyer, Amanda E.; Caouette, Justin D.; Lee, Clinton C.; Ruiz, Sarah K.

    2014-01-01

    Relative to children and adults, adolescents are highly focused on being evaluated by peers. This increased attention to peer evaluation has implications for emotion regulation in adolescence, but little is known about the characteristics of the evaluatee and evaluator that influence emotional reactions to evaluative outcomes. The present study…

  20. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights some evaluation methods to assess the quality characteristics of text entities. The main concepts used in the building and evaluation processes of text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The process for automatic evaluation of text entities is…
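
    The abstract does not spell out the aggregated orthogonality metrics, so the sketch below is only one plausible reading: pairwise orthogonality as one minus the cosine similarity of term-frequency vectors, aggregated as a mean across all entity pairs. The metric choice, function names, and example texts are all illustrative assumptions, not the paper's definitions.

    ```python
    # One plausible (assumed) orthogonality metric for text entities:
    # 1 - cosine similarity of term-frequency vectors, averaged over pairs.
    from collections import Counter
    import math

    def orthogonality(text_a, text_b):
        """1.0 = no shared terms, 0.0 = identical term distribution.
        Assumes non-empty texts."""
        a = Counter(text_a.lower().split())
        b = Counter(text_b.lower().split())
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return 1.0 - dot / (norm_a * norm_b)

    def aggregated_orthogonality(entities):
        """Mean pairwise orthogonality across a set of text entities."""
        pairs = [(i, j) for i in range(len(entities))
                 for j in range(i + 1, len(entities))]
        return sum(orthogonality(entities[i], entities[j])
                   for i, j in pairs) / len(pairs)

    print(aggregated_orthogonality(["login screen text",
                                    "error message text",
                                    "help page text"]))
    ```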

  1. Which Features of Spanish Learners' Pronunciation Most Impact Listener Evaluations?

    ERIC Educational Resources Information Center

    McBride, Kara

    2015-01-01

    This study explores which features of Spanish as a foreign language (SFL) pronunciation most impact raters' evaluations. Native Spanish speakers (NSSs) from regions with different pronunciation norms were polled: 147 evaluators from northern Mexico and 99 evaluators from central Argentina. These evaluations were contrasted with ratings from…

  2. 24 CFR 401.450 - Owner evaluation of physical condition.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Owner evaluation of physical... PROGRAM (MARK-TO-MARKET) Restructuring Plan § 401.450 Owner evaluation of physical condition. (a) Initial evaluation. The owner must evaluate the physical condition of the project and provide the following...

  3. American Evaluation Association: Guiding Principles for Evaluators

    ERIC Educational Resources Information Center

    American Journal of Evaluation, 2009

    2009-01-01

    The American Evaluation Association (AEA) strives to promote ethical practice in the evaluation of programs, products, personnel, and policy. This article presents the list of principles which AEA developed to guide evaluators in their professional practice. These principles are: (1) Systematic Inquiry; (2) Competence; (3) Integrity/Honesty; (4)…

  4. Modification and Adaptation of the Program Evaluation Standards in Saudi Arabia

    ERIC Educational Resources Information Center

    Alyami, Mohammed

    2013-01-01

    The Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards is probably the most recognized and applied set of evaluation standards globally. The most recent edition of The Program Evaluation Standards includes five categories and 30 standards. The five categories are Utility, Feasibility, Propriety, Accuracy, and…

  5. Integrating Participatory Elements into an Effectiveness Evaluation

    ERIC Educational Resources Information Center

    Wallace, Tanner LeBaron

    2008-01-01

    This article describes an effectiveness evaluation of an intensive case management intervention coordinated by a non-profit organization in a midsize Midwest City. As an effectiveness evaluation, the primary evaluation question was causal in nature; the key task of the evaluative study was to establish and probe connections between the…

  6. 38 CFR 21.8030 - Requirement for evaluation of child.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... evaluation of child. 21.8030 Section 21.8030 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... Certain Children of Vietnam Veterans and Veterans with Covered Service in Korea-Spina Bifida and Covered Birth Defects Evaluation § 21.8030 Requirement for evaluation of child. (a) Children to be evaluated...

  7. 38 CFR 21.8030 - Requirement for evaluation of child.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... evaluation of child. 21.8030 Section 21.8030 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... Certain Children of Vietnam Veterans and Veterans with Covered Service in Korea-Spina Bifida and Covered Birth Defects Evaluation § 21.8030 Requirement for evaluation of child. (a) Children to be evaluated...

  8. 38 CFR 21.8030 - Requirement for evaluation of child.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... evaluation of child. 21.8030 Section 21.8030 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... Certain Children of Vietnam Veterans and Veterans with Covered Service in Korea-Spina Bifida and Covered Birth Defects Evaluation § 21.8030 Requirement for evaluation of child. (a) Children to be evaluated...

  9. 38 CFR 21.8030 - Requirement for evaluation of child.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... evaluation of child. 21.8030 Section 21.8030 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... Certain Children of Vietnam Veterans and Veterans with Covered Service in Korea-Spina Bifida and Covered Birth Defects Evaluation § 21.8030 Requirement for evaluation of child. (a) Children to be evaluated...

  10. 24 CFR 401.450 - Owner evaluation of physical condition.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Owner evaluation of physical... PROGRAM (MARK-TO-MARKET) Restructuring Plan § 401.450 Owner evaluation of physical condition. (a) Initial evaluation. The owner must evaluate the physical condition of the project and provide the following...

  11. 24 CFR 401.450 - Owner evaluation of physical condition.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Owner evaluation of physical... PROGRAM (MARK-TO-MARKET) Restructuring Plan § 401.450 Owner evaluation of physical condition. (a) Initial evaluation. The owner must evaluate the physical condition of the project and provide the following...

  12. 24 CFR 401.450 - Owner evaluation of physical condition.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Owner evaluation of physical... PROGRAM (MARK-TO-MARKET) Restructuring Plan § 401.450 Owner evaluation of physical condition. (a) Initial evaluation. The owner must evaluate the physical condition of the project and provide the following...

  13. 24 CFR 401.450 - Owner evaluation of physical condition.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Owner evaluation of physical... PROGRAM (MARK-TO-MARKET) Restructuring Plan § 401.450 Owner evaluation of physical condition. (a) Initial evaluation. The owner must evaluate the physical condition of the project and provide the following...

  14. Evaluation Strategies in Financial Education: Evaluation with Imperfect Instruments

    ERIC Educational Resources Information Center

    Robinson, Lauren; Dudensing, Rebekka; Granovsky, Nancy L.

    2016-01-01

    Program evaluation often suffers due to time constraints, imperfect instruments, incomplete data, and the need to report standardized metrics. This article about the evaluation process for the Wi$eUp financial education program showcases the difficulties inherent in evaluation and suggests best practices for assessing program effectiveness. We…

  15. 42 CFR 491.11 - Program evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Program evaluation. 491.11 Section 491.11 Public... Certification; and FQHCs Conditions for Coverage § 491.11 Program evaluation. (a) The clinic or center carries out, or arranges for, an annual evaluation of its total program. (b) The evaluation includes review of...

  16. 20 CFR 365.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Self-evaluation. 365.110 Section 365.110... § 365.110 Self-evaluation. (a) The agency shall, by December 27, 1989, evaluate its current policies and... self-evaluation process by submitting comments (both oral and written). (c) The agency shall, until at...

  17. Students Evaluation of Faculty

    ERIC Educational Resources Information Center

    Thawabieh, Ahmad M.

    2017-01-01

    This study aimed to investigate how students evaluate their faculty and the effects of gender, expected grade, and college on students' evaluations. The study sample consisted of 5291 students from Tafila Technical University. A faculty evaluation scale was used to collect data. The results indicated that student evaluation of faculty was high (mean =…

  18. 22 CFR 711.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Self-evaluation. 711.110 Section 711.110 Foreign... CORPORATION § 711.110 Self-evaluation. (a) The agency shall, by September 6, 1989, evaluate its current... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  19. Increasing the Value of Evaluation to Philanthropic Foundations

    ERIC Educational Resources Information Center

    Greenwald, Howard P.

    2013-01-01

    This article synthesizes interview data from evaluation directors and top executives of philanthropic foundations on how evaluation might better advance their missions. In key informant interviews, respondents commented on the purposes of evaluation from the foundation's perspective, challenges to effective evaluation, and the means by which…

  20. 21 CFR 900.5 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Evaluation. 900.5 Section 900.5 Food and Drugs... STANDARDS ACT MAMMOGRAPHY Accreditation § 900.5 Evaluation. FDA shall evaluate annually the performance of each accreditation body. Such evaluation shall include an assessment of the reports of FDA or State...

  1. 21 CFR 900.23 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Evaluation. 900.23 Section 900.23 Food and Drugs... STANDARDS ACT MAMMOGRAPHY States as Certifiers § 900.23 Evaluation. FDA shall evaluate annually the performance of each certification agency. The evaluation shall include the use of performance indicators that...

  2. 18 CFR 1313.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Self-evaluation. 1313... VALLEY AUTHORITY § 1313.110 Self-evaluation. (a) The agency shall, by August 24, 1987, evaluate its... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  3. Complicity Revisited: Balancing Stakeholder Input and Roles in Evaluation Use

    ERIC Educational Resources Information Center

    Sturges, Keith M.

    2015-01-01

    Drawing on a qualitative study of an educational reform and its external evaluation, I describe how a well-intentioned but poorly conceptualized evaluation helped perpetuate asymmetries in the generation and use of evaluation findings. This article explores this project's failure to clarify evaluator roles, identify intended users and expected…

  4. 49 CFR 807.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Self-evaluation. 807.110 Section 807.110... TRANSPORTATION SAFETY BOARD § 807.110 Self-evaluation. (a) The agency shall, by April 9, 1987, evaluate its... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  5. 29 CFR 100.510 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false Self-evaluation. 100.510 Section 100.510 Labor Regulations... § 100.510 Self-evaluation. (a) The agency shall, by September 6, 1989, evaluate its current policies and... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  6. 34 CFR 75.592 - Federal evaluation-satisfying requirement for grantee evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Federal evaluation-satisfying requirement for grantee evaluation. 75.592 Section 75.592 Education Office of the Secretary, Department of Education DIRECT GRANT PROGRAMS What Conditions Must Be Met by a Grantee? Evaluation § 75.592 Federal evaluation—satisfying...

  7. Social Influences on Creativity: Evaluation, Coaction, and Surveillance.

    ERIC Educational Resources Information Center

    Amabile, Teresa M.; And Others

    Two experiments examined the effects of evaluation expectation and the presence of others on creativity in undergraduate students. In both, some Ss expected that their work would be evaluated by experts, while others expected no evaluation. Evaluation expectation was crossed, in each experiment, with the presence of others. In the first…

  8. Training Evaluation: An Analysis of the Stakeholders' Evaluation Needs

    ERIC Educational Resources Information Center

    Guerci, Marco; Vinante, Marco

    2011-01-01

    Purpose: In recent years, the literature on program evaluation has examined multi-stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap. Design/methodology/approach: This study identifies intersections between methodologies and approaches…

  9. Institution Building and Evaluation.

    ERIC Educational Resources Information Center

    Wedemeyer, Charles A.

    Institutional modeling and program evaluation in relation to a correspondence program are discussed. The evaluation process is first considered from the viewpoint that it is an add-on activity, which is largely summative, and is the least desirable type of evaluation. Formative evaluation is next considered as a part of the process of institution…

  10. Beyond Evaluation: A Model for Cooperative Evaluation of Internet Resources.

    ERIC Educational Resources Information Center

    Kirkwood, Hal P., Jr.

    1998-01-01

    Presents a status report on Web site evaluation efforts, listing dead, merged, new review, Yahoo! wannabes, subject-specific review, former librarian-managed, and librarian-managed review sites; discusses how sites are evaluated; describes and demonstrates (reviewing company directories) the Marr/Kirkwood evaluation model; and provides an…

  11. Evaluating Educational Programs. ERIC Digest Series Number EA 54.

    ERIC Educational Resources Information Center

    Beswick, Richard

    In this digest, readers are introduced to the scope of instructional program evaluation and evaluators' changing roles in school districts. A program evaluation measures outcomes based on student-attainment goals, implementation levels, and external factors such as budgetary restraints and community support. Instructional program evaluation may be…

  12. ORE's GENeric Evaluation SYStem: GENESYS 1988-89.

    ERIC Educational Resources Information Center

    Baenen, Nancy; And Others

    GENESYS--GENeric Evaluation SYStem--is a method of streamlining data collection and evaluation through the use of computer technology. GENESYS has allowed the Office of Research and Evaluation (ORE) of the Austin (Texas) Independent School District to evaluate a multitude of contrasting programs with limited resources. By standardizing methods and…

  13. 20 CFR 220.101 - Evaluation of mental impairments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... ACT DETERMINING DISABILITY Evaluation of Disability § 220.101 Evaluation of mental impairments. (a) General. The steps outlined in § 220.100 apply to the evaluation of physical and mental impairments. In... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Evaluation of mental impairments. 220.101...

  14. Library Programs. Evaluating Federally Funded Public Library Programs.

    ERIC Educational Resources Information Center

    Office of Educational Research and Improvement (ED), Washington, DC.

    Following an introduction by Betty J. Turock, nine reports examine key issues in library evaluation: (1) "Output Measures and the Evaluation Process" (Nancy A. Van House) describes measurement as a concept to be understood in the larger context of planning and evaluation; (2) "Adapting Output Measures to Program Evaluation"…

  15. Learning on the Job: Teacher Evaluation Can Foster Real Growth

    ERIC Educational Resources Information Center

    Ritter, Gary W.; Barnett, Joshua H.

    2016-01-01

    Since 2010, there has been much policy activity on teacher evaluation. Many education policy makers have embraced the idea that improved teacher evaluation can cultivate genuine improvements in the teaching force and improved student outcomes. Can genuine evaluation actually enhance the effectiveness of those evaluated? Using structured interviews…

  16. Assessing the Subsequent Effect of a Formative Evaluation on a Program.

    ERIC Educational Resources Information Center

    Brown, J. Lynne; Kiernan, Nancy Ellen

    2001-01-01

    Conducted a formative evaluation of an osteoporosis prevention health education program using several methods, including questionnaires completed by 256 women, and then compared formative evaluation results to those of a summative evaluation focusing on the same target group. Results show the usefulness of formative evaluation for strengthening…

  17. Contracting for Independent Evaluation: Approaches to an Inherent Tension

    ERIC Educational Resources Information Center

    Klerman, Jacob Alex

    2010-01-01

    There has recently been discussion of whether independent contract evaluation is possible. This article acknowledges the inherent tension in contract evaluation and in response suggests a range of constructive approaches to improving the independence of contract evaluation. In particular, a clear separation between the official evaluation report…

  18. Annual Report. Technical Reports. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Alkin, Marvin; And Others

    After outlining the 1984 activities and results of the Center for the Study of Evaluation's (CSE's) Evaluation Productivity Project, this monograph presents three reports. The first, "The Administrator's Role in Evaluation Use," by James Burry, Marvin C. Alkin, and Joan A. Ruskus, describes the factors influencing an evaluation's use…

  19. Maximizing the Impact of Program Evaluation: A Discrepancy-Based Process for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    This paper describes a formative/summative process for educational program evaluation, which is appropriate for higher education programs and is based on M. Provus' Discrepancy Evaluation Model and the principles of instructional design. The Discrepancy Based Methodology for Educational Program Evaluation facilitates systematic and detailed…

  20. An Analysis of State and Local Alignment of Teacher Evaluation in Maryland

    ERIC Educational Resources Information Center

    Peterson, Serene N.

    2014-01-01

    This study explored the components of Maryland's newly implemented teacher evaluation framework and compared state requirements with three local school systems' evaluation procedures. The study sought to investigate the relationship between the three local evaluation protocols and the state requirements. Three local school…

  1. 34 CFR 75.592 - Federal evaluation-satisfying requirement for grantee evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 1 2014-07-01 2014-07-01 false Federal evaluation-satisfying requirement for grantee evaluation. 75.592 Section 75.592 Education Office of the Secretary, Department of Education DIRECT GRANT PROGRAMS What Conditions Must Be Met by a Grantee? Evaluation § 75.592 Federal evaluation—satisfying...

  2. Paired Comparison Evaluations of Managerial Effectiveness by Peers and Supervisors.

    ERIC Educational Resources Information Center

    Siegel, Laurence

    1982-01-01

    Solicited paired comparison evaluations for a group of savings and loan association branch managers. Peer evaluations were obtained from 16 of these managers; supervisory evaluations were obtained from four officers. Interjudge agreement (both within and between groups) was high. Peer-generated evaluations assisted officers in making acceptable…

  3. Quality of Instruction Improved by Evaluation and Consultation of Instructors

    ERIC Educational Resources Information Center

    Rindermann, Heiner; Kohler, Jurgen; Meisenberg, Gerhard

    2007-01-01

    One aim of student evaluation of instruction is the improvement of teaching quality, but there is little evidence that student assessment of instruction alone improves teaching. This study tried to improve the effects of evaluation by combining evaluation with individual counselling in an institutional development approach. Evaluation was…

  4. Secondary Evaluations.

    ERIC Educational Resources Information Center

    Cook, Thomas D.

    Secondary evaluations, in which an investigator takes a body of evaluation data collected by a primary evaluation researcher and examines the data to see if the original conclusions about the program correspond with his own, are discussed. The different kinds of secondary evaluations and the advantages and disadvantages of each are pointed out,…

  5. Evaluation as a Collaborative Activity to Learn Content Knowledge in a Graduate Course

    ERIC Educational Resources Information Center

    Hughes, Bob; Arbogast, Janet; Kafer, Lindsey; Chen, Julianna

    2014-01-01

    Teaching graduate students to conduct evaluations is typically relegated to evaluation methods courses. This approach misses an opportunity for students to collaboratively use evaluation skills to explore content. This article examines a graduate course, Issues in Adult Basic Education, in which students learned evaluation methods concurrently…

  6. 40 CFR 300.410 - Removal site evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 27 2010-07-01 2010-07-01 false Removal site evaluation. 300.410... PLAN Hazardous Substance Response § 300.410 Removal site evaluation. (a) A removal site evaluation... evaluation of a release identified for possible CERCLA response pursuant to § 300.415 shall, as appropriate...

  7. Responsive Meta-Evaluation: A Participatory Approach to Enhancing Evaluation Quality

    ERIC Educational Resources Information Center

    Sturges, Keith M.; Howley, Caitlin

    2017-01-01

    In an era of ever-deepening budget cuts and a concomitant demand for substantiated programs, many organizations have elected to conduct internal program evaluations. Internal evaluations offer advantages (e.g., enhanced evaluator program knowledge and ease of data collection) but may confront important challenges, including credibility threats,…

  8. 40 CFR 60.1740 - What is my schedule for evaluating continuous emission monitoring systems?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... continuous emission monitoring systems? 60.1740 Section 60.1740 Protection of Environment ENVIRONMENTAL... evaluating continuous emission monitoring systems? (a) Conduct annual evaluations of your continuous emission monitoring systems no more than 13 months after the previous evaluation was conducted. (b) Evaluate your...

  9. 40 CFR 62.15195 - What is my schedule for evaluating continuous emission monitoring systems?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... continuous emission monitoring systems? 62.15195 Section 62.15195 Protection of Environment ENVIRONMENTAL... evaluating continuous emission monitoring systems? (a) Conduct annual evaluations of your continuous emission monitoring systems no more than 13 months after the previous evaluation was conducted. (b) Evaluate your...

  10. 40 CFR 62.15195 - What is my schedule for evaluating continuous emission monitoring systems?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 62.15195, Protection of Environment: What is my schedule for evaluating continuous emission monitoring systems? (a) Conduct annual evaluations of your continuous emission monitoring systems no more than 13 months after the previous evaluation was conducted. (b) Evaluate your...

  11. The Law of Teacher Evaluation. NOLPE Monograph/Book Series, No. 42.

    ERIC Educational Resources Information Center

    Rossow, Lawrence F.; Parkinson, Jerry

    Litigation in the area of teacher evaluation has developed around issues concerning the processes and criteria used by school districts in conducting evaluations. Following an introduction explaining basic concepts, chapter 2 discusses the appropriate content of teacher evaluation, examining formal adoption of evaluation policies, compliance with…

  12. Current and Developing Conceptions of Use: Evaluation Use TIG Survey Results.

    ERIC Educational Resources Information Center

    Preskill, Hallie; Caracelli, Valerie

    1997-01-01

    A survey was sent to members of the Evaluation Use Topical Interest Group (TIG) to determine their perceptions about and experiences with evaluation use. Responses from 282 members show agreement on the major purposes of evaluation and an increased use of performance-results oriented and formative evaluations. (SLD)

  13. The case for applying an early-lifecycle technology evaluation methodology to comparative evaluation of requirements engineering research

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2003-01-01

    The premise of this paper is that there is a useful analogy between evaluation of proposed problem solutions and evaluation of requirements engineering research itself. Both of these application areas face the challenges of evaluation early in the lifecycle, of the need to consider a wide variety of factors, and of the need to combine inputs from multiple stakeholders in making these evaluations and subsequent decisions.

  14. A participatory evaluation model for Healthier Communities: developing indicators for New Mexico.

    PubMed Central

    Wallerstein, N

    2000-01-01

    Participatory evaluation models that invite community coalitions to take an active role in developing evaluations of their programs are a natural fit with Healthy Communities initiatives. The author describes the development of a participatory evaluation model for New Mexico's Healthier Communities program. She describes evaluation principles, research questions, and baseline findings. The evaluation model shows the links between process, community-level system impacts, and population health changes. PMID:10968754

  15. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young alike. Many KARAOKE machines have a singing evaluation function; however, it is often said that the scores given by KARAOKE machines do not match human evaluation. This paper proposes a KARAOKE scoring method strongly correlated with human evaluation: songs are evaluated based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76∼-0.89.
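
    The abstract gives the central measure (distance between singing pitch and the musical scale) but not its computation. One plausible minimal reading, with all data and names invented, treats pitch in cents and measures each sample's deviation from the nearest equal-tempered semitone.

      import numpy as np

      def mean_scale_distance(pitch_cents: np.ndarray) -> float:
          """Mean absolute distance (cents) from each pitch sample to the
          nearest equal-tempered scale note (semitones at multiples of 100)."""
          offset = pitch_cents % 100.0
          return float(np.minimum(offset, 100.0 - offset).mean())

      # Invented pitch track; a larger distance would map to a lower score,
      # consistent with the negative correlations (-0.76 to -0.89) reported.
      sung = np.array([0.0, 12.0, 95.0, 204.0, 310.0])
      print(f"mean scale distance: {mean_scale_distance(sung):.1f} cents")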

  16. Disability Policy Evaluation: Combining Logic Models and Systems Thinking.

    PubMed

    Claes, Claudia; Ferket, Neelke; Vandevelde, Stijn; Verlet, Dries; De Maeyer, Jessica

    2017-07-01

    Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions, services, and supports provided to those persons to whom the policy is directed. This article describes a systematic approach to policy evaluation based on an evaluation framework and an evaluation process that combine the use of logic models and systems thinking. The article also includes an example of how the framework and process have recently been used in policy development and evaluation in Flanders (Belgium), as well as four policy evaluation guidelines based on relevant published literature.

  17. Shuttle orbiter Ku-band radar/communications system design evaluation

    NASA Technical Reports Server (NTRS)

    Dodds, J.; Holmes, J.; Huth, G. K.; Iwasaki, R.; Maronde, R.; Polydoros, A.; Weber, C.; Broad, P.

    1980-01-01

    Tasks performed in an examination and critique of a Ku-band radar communications system for the shuttle orbiter are reported. Topics cover: (1) Ku-band high gain antenna/widebeam horn design evaluation; (2) evaluation of the Ku-band SPA and EA-1 LRU software; (3) system test evaluation; (4) critical design review and development test evaluation; (5) Ku-band bent pipe channel performance evaluation; (6) Ku-band LRU interchangeability analysis; and (7) deliverable test equipment evaluation. Where discrepancies were found, modifications and improvements to the Ku-band system and the associated test procedures are suggested.

  18. Motivational Differences in Seeking Out Evaluative Categorization Information.

    PubMed

    Smallman, Rachel; Becker, Brittney

    2017-07-01

    Previous research shows that people draw finer evaluative distinctions when rating liked versus disliked objects (e.g., wanting a 5-point scale to evaluate liked cuisines and a 3-point scale to rate disliked cuisines). Known as the preference-categorization effect, this pattern may exist not only in how individuals form evaluative distinctions but also in how individuals seek out evaluative information. The current research presents three experiments that examine motivational differences in evaluative information seeking (rating scales and attributes). Experiment 1 found that freedom of choice (the ability to avoid undesirable stimuli) and sensitivity to punishment (as measured by the Behavior Inhibition System/Behavioral Approach System [BIS/BAS] scale) influenced preferences for desirable and undesirable evaluative information in a health-related decision. Experiment 2 examined choice optimization, finding that maximizers prefer finer evaluative information for both liked and disliked options in a consumer task. Experiment 3 found that this pattern generalizes to another type of evaluative categorization, attributes.

  19. You got a problem with that? Exploring evaluators' disagreements about ethics.

    PubMed

    Morris, M; Jacobs, L R

    2000-08-01

    A random sample of American Evaluation Association (AEA) members were surveyed for their reactions to three case scenarios--informed consent, impartial reporting, and stakeholder involvement--in which an evaluator acts in a way that could be deemed ethically problematic. Significant disagreement among respondents was found for each of the scenarios, in terms of respondents' views of whether the evaluator had behaved unethically. Respondents' explanations of their judgments support the notion that general guidelines for professional behavior (such as AEA's Guiding Principles for Evaluators) can encompass sharply conflicting interpretations of how evaluators should behave in specific situations. Respondents employed in private business/consulting were less likely than those in other settings to believe that the scenarios portrayed unethical behavior by the evaluator, a finding that underscores the importance of taking contextual variables into account when analyzing evaluators' ethical perceptions. The need for increased dialogue among evaluators who represent varied perspectives on ethical issues is addressed.

  20. A Comparison of Participant and Practitioner Beliefs About Evaluation

    PubMed Central

    Whitehall, Anna K.; Hill, Laura G.; Koehler, Christian R.

    2014-01-01

    The move to build capacity for internal evaluation is a common organizational theme in social service delivery, and in many settings, the evaluator is also the practitioner who delivers the service. The goal of the present study was to extend our limited knowledge of practitioner evaluation. Specifically, the authors examined practitioner concerns about administering pretest and posttest evaluations within the context of a multisite 7-week family strengthening program and compared those concerns with self-reported attitudes of the parents who completed evaluations. The authors found that program participants (n = 105) were significantly less likely to find the evaluation process intrusive, and more likely to hold positive beliefs about the evaluation process, than practitioners (n = 140) expected. Results of the study may address a potential barrier to effective practitioner evaluation—the belief that having to administer evaluations interferes with establishing a good relationship with program participants. PMID:25328379

  1. Using evaluation methods to guide the development of a tobacco-use prevention curriculum for youth: a case study.

    PubMed

    Bridge, P D; Gallagher, R E; Berry-Bobovski, L C

    2000-01-01

    Fundamental to the development of educational programs and curricula is the evaluation of processes and outcomes. Unfortunately, many otherwise well-designed programs do not incorporate stringent evaluation methods and are limited in measuring program development and effectiveness. Using an advertising lesson in a school-based tobacco-use prevention curriculum as a case study, the authors examine the role of evaluation in the development, implementation, and enhancement of the curricular lesson. A four-phase formative and summative evaluation design was developed to divide the program-evaluation continuum into a structured process that would aid in the management of the evaluation, as well as assess curricular components. Formative and summative evaluation can provide important guidance in the development, implementation, and enhancement of educational curricula. Evaluation strategies identified unexpected barriers and allowed the project team to make necessary "time-relevant" curricular adjustments during each stage of the process.

  2. Let's get technical: Enhancing program evaluation through the use and integration of internet and mobile technologies.

    PubMed

    Materia, Frank T; Miller, Elizabeth A; Runion, Megan C; Chesnut, Ryan P; Irvin, Jamie B; Richardson, Cameron B; Perkins, Daniel F

    2016-06-01

    Program evaluation has become increasingly important, and information on program performance often drives funding decisions. Technology use and integration can help ease the burdens associated with program evaluation by reducing the resources needed (e.g., time, money, staff) and increasing evaluation efficiency. This paper reviews how program evaluators, across disciplines, can apply internet and mobile technologies to key aspects of program evaluation, which consist of participant registration, participant tracking and retention, process evaluation (e.g., fidelity, assignment completion), and outcome evaluation (e.g., behavior change, knowledge gain). In addition, the paper focuses on the ease of use, relative cost, and fit with populations. An examination of how these tools can be integrated to enhance data collection and program evaluation is provided. Important limitations of and considerations for technology integration, including the level of technical skill, cost needed to integrate various technologies, data management strategies, and ethical considerations, are highlighted. Lastly, a case study of technology use in an evaluation conducted by the Clearinghouse for Military Family Readiness at Penn State is presented and illustrates how technology integration can enhance program evaluation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice.

    PubMed

    Rorrer, Audrey S

    2016-04-01

    This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Sciences and Engineering Research Experiences for Undergraduates (CISE REU) programs to address the ongoing need for evaluation capacity among principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community, with the purpose being to provide targeted instructional resources and tools for quality program evaluation. Challenges were to balance the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. Resulting benefits from toolkit deployment were having cost-effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts. Copyright © 2016. Published by Elsevier Ltd.

  4. Static-99R reporting practices in sexually violent predator cases: Does norm selection reflect adversarial allegiance?

    PubMed

    Chevalier, Caroline S; Boccaccini, Marcus T; Murrie, Daniel C; Varela, Jorge G

    2015-06-01

    We surveyed experts (N = 109) who conduct sexually violent predator (SVP) evaluations to obtain information about their Static-99R score reporting and interpretation practices. Although most evaluators reported providing at least 1 normative sample recidivism rate estimate, there were few other areas of consensus. Instead, reporting practices differed depending on the side for which evaluators typically performed evaluations. Defense evaluators were more likely to endorse reporting practices that convey the lowest possible level of risk (e.g., routine sample recidivism rates, 5-year recidivism rates) and the highest level of uncertainty (e.g., confidence intervals, classification accuracy), whereas prosecution evaluators were more likely to endorse practices suggesting the highest possible level of risk (e.g., high risk/need sample recidivism rates, 10-year recidivism rates). Reporting practices from state-agency evaluators tended to be more consistent with those of prosecution evaluators than defense evaluators, although state-agency evaluators were more likely than other evaluators to report that it was at least somewhat difficult to choose an appropriate normative comparison group. Overall, findings provide evidence for adversarial allegiance in Static-99R score reporting and interpretation practices. (c) 2015 APA, all rights reserved.

  5. Judgment under uncertainty; a probabilistic evaluation framework for decision-making about sanitation systems in low-income countries.

    PubMed

    Malekpour, Shirin; Langeveld, Jeroen; Letema, Sammy; Clemens, François; van Lier, Jules B

    2013-03-30

    This paper introduces the probabilistic evaluation framework, to enable transparent and objective decision-making in technology selection for sanitation solutions in low-income countries. The probabilistic framework recognizes the often poor quality of the available data for evaluations. Within this framework, evaluations are done based on the probabilities that the expected outcomes occur in practice, considering the uncertainties in evaluation parameters. Consequently, the outcome of an evaluation is not a single point estimate; instead, there is a range of possible outcomes. A first trial application of this framework for evaluation of sanitation options in the Nyalenda settlement in Kisumu, Kenya, showed how the range of values that an evaluation parameter may obtain in practice would influence the evaluation outcomes. In addition, as the probabilistic evaluation requires various site-specific data, sensitivity analysis was performed to determine the influence of each data set quality on the evaluation outcomes. Based on that, data collection activities could be (re)directed, in a trade-off between the required investments in those activities and the resolution of the decisions that are to be made. Copyright © 2013 Elsevier Ltd. All rights reserved.
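
    The framework's central move, evaluating options by the probability that expected outcomes occur rather than by point estimates, lends itself to Monte Carlo sampling. The sketch below is a minimal illustration under invented parameter distributions and an invented budget figure, not the paper's actual model.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000  # Monte Carlo draws

      # Invented uncertain inputs for a sanitation option (illustration only).
      cost_per_household = rng.triangular(40.0, 55.0, 90.0, n)  # low/mode/high, USD
      households_served = rng.uniform(800, 1200, n)

      total_cost = cost_per_household * households_served
      budget = 70_000.0  # assumed available budget

      # The evaluation outcome is a probability, not a single point estimate.
      print(f"P(total cost within budget) = {(total_cost <= budget).mean():.2f}")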

  6. [Progress and prospects on evaluation of ecological restoration: a review of the 5th World Conference on Ecological Restoration].

    PubMed

    Ding, Jing-Yi; Zhao, Wen-Wu

    2014-09-01

    The 5th World Conference on Ecological Restoration was held in Madison, Wisconsin, USA on October 6-11, 2013. About 1200 delegates from more than 50 countries attended the conference and discussed the latest developments in different thematic areas of ecological restoration. Discussions on the evaluation of ecological restoration centered on three aspects: the construction of evaluation indicator systems for ecological restoration; evaluation methods for ecological restoration; and monitoring and dynamic evaluation of ecological restoration. The meeting stressed the importance of evaluation in the process of ecological restoration and the challenges such evaluation faces. The conference offered the following lessons for China's research on the evaluation of ecological restoration: 1) Strengthening the construction of comprehensive evaluation indicator systems and focusing on multi-participation in the evaluation process. 2) Paying more attention to scale effects and scale transformation in the evaluation process of ecological restoration. 3) Expanding the application of 3S technology in assessing the success of ecological restoration and promoting the dynamic monitoring of ecological restoration. 4) Carrying out international exchanges and cooperation actively, and promoting China's international influence in ecological restoration research.

  7. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit).

    PubMed

    Yusof, Maryati Mohd; Kuljis, Jasna; Papazafeiropoulou, Anastasia; Stergioulas, Lampros K

    2008-06-01

    The realization of Health Information Systems (HIS) requires rigorous evaluation that addresses technology, human and organization issues. Our review indicates that current evaluation methods evaluate different aspects of HIS and that they can be improved upon. A new evaluation framework, human, organization and technology-fit (HOT-fit), was developed after a critical appraisal of the findings of existing HIS evaluation studies. HOT-fit builds on previous models of IS evaluation--in particular, the IS Success Model and the IT-Organization Fit Model. This paper introduces the new framework for HIS evaluation that incorporates comprehensive dimensions and measures of HIS and provides a technological, human and organizational fit. The methods comprised a literature review of HIS and IS evaluation studies and pilot testing of the developed framework. The framework was used to evaluate a Fundus Imaging System (FIS) of a primary care organization in the UK. The case study was conducted through observation, interview and document analysis. The main findings show that having the right user attitude and skills base, together with good leadership, an IT-friendly environment and good communication, can have a positive influence on system adoption. Comprehensive, specific evaluation factors, dimensions and measures in the new framework (HOT-fit) are applicable in HIS evaluation. The use of such a framework is argued to be useful not only for comprehensive evaluation of the particular FIS system under investigation, but potentially also for any Health Information System in general.

  8. Presenting an Evaluation Model for the Cancer Registry Software.

    PubMed

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer is increasingly growing, cancer registry is of great importance as the main core of cancer control programs, and many different software has been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential to evaluate and compare a wide range of such software. In this study, the criteria of the cancer registry software have been determined by studying the documents and two functional software of this field. The evaluation tool was a checklist and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of validation, an agreed coefficient of %75 was determined in order to apply changes. Finally, when the model was approved, the final version of the evaluation model for the cancer registry software was presented. The evaluation model of this study contains tool and method of evaluation. The evaluation tool is a checklist including the general and specific criteria of the cancer registry software along with their sub-criteria. The evaluation method of this study was chosen as a criteria-based evaluation method based on the findings. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between general criteria and the specific ones, while trying to fulfill the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate the cancer registry software.

  9. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    PubMed Central

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  10. Validity of peer grading using Calibrated Peer Review in a guided-inquiry, conceptual physics course

    NASA Astrophysics Data System (ADS)

    Price, Edward; Goldberg, Fred; Robinson, Steve; McKean, Michael

    2016-12-01

    Constructing and evaluating explanations are important science practices, but in large classes it can be difficult to effectively engage students in these practices and provide feedback. Peer review and grading are scalable instructional approaches that address these concerns, but which raise questions about the validity of the peer grading process. Calibrated Peer Review (CPR) is a web-based system that scaffolds peer evaluation through a "calibration" process where students evaluate sample responses and receive feedback on their evaluations before evaluating their peers. Guided by an activity theory framework, we developed, implemented, and evaluated CPR-based tasks in guided-inquiry, conceptual physics courses for future teachers and general education students. The tasks were developed through iterative testing and revision. Effective tasks had specific and directed prompts and evaluation instructions. Using these tasks, over 350 students at three universities constructed explanations or analyzed physical phenomena, and evaluated their peers' work. By independently assessing students' responses, we evaluated the CPR calibration process and compared students' peer reviews with expert evaluations. On the tasks analyzed, peer scores were equivalent to our independent evaluations. On a written explanation item included on the final exam, students in the courses using CPR outperformed students in similar courses using traditional writing assignments without a peer evaluation element. Our research demonstrates that CPR can be an effective way to explicitly include the science practices of constructing and evaluating explanations into large classes without placing a significant burden on the instructor.
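
    Checking whether peer scores are "equivalent to independent evaluations," as reported here, typically reduces to comparing the two score sets directly. A minimal sketch with invented scores (not the study's data) uses a Pearson correlation plus the mean peer-minus-expert difference.

      import numpy as np
      from scipy.stats import pearsonr  # assumes SciPy is available

      # Invented per-response scores for illustration only.
      peer_mean = np.array([7.0, 8.5, 6.0, 9.0, 5.5, 8.0])
      expert = np.array([7.5, 8.0, 6.5, 9.0, 5.0, 8.5])

      r, p = pearsonr(peer_mean, expert)
      bias = float((peer_mean - expert).mean())  # positive = peers score higher

      print(f"peer-expert correlation: r = {r:.2f} (p = {p:.3f})")
      print(f"mean peer-minus-expert difference: {bias:+.2f} points")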

  11. Evaluating participation in water resource management: A review

    NASA Astrophysics Data System (ADS)

    Carr, G.; Blöschl, G.; Loucks, D. P.

    2012-11-01

    Key documents such as the European Water Framework Directive and the U.S. Clean Water Act state that public and stakeholder participation in water resource management is required. Participation aims to enhance resource management and involve individuals and groups in a democratic way. Evaluation of participatory programs and projects is necessary to assess whether these objectives are being achieved and to identify how participatory programs and projects can be improved. The different methods of evaluation can be classified into three groups: (i) process evaluation assesses the quality of the participation process, for example, whether it is legitimate and promotes equal power between participants, (ii) intermediary outcome evaluation assesses the achievement of mainly nontangible outcomes, such as trust and communication, as well as short- to medium-term tangible outcomes, such as agreements and institutional change, and (iii) resource management outcome evaluation assesses the achievement of changes in resource management, such as water quality improvements. Process evaluation forms a major component of the literature but can rarely indicate whether a participation program improves water resource management. Resource management outcome evaluation is challenging because resource changes often emerge beyond the typical period covered by the evaluation and because changes cannot always be clearly related to participation activities. Intermediary outcome evaluation has been given less attention than process evaluation but can identify some real achievements and side benefits that emerge through participation. This review suggests that intermediary outcome evaluation should play a more important role in evaluating participation in water resource management.

  12. CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program.

    PubMed

    Moores, Carly Jane; Miller, Jacqueline; Perry, Rebecca Anne; Chan, Lily Lai Hang; Daniels, Lynne Allison; Vidgen, Helen Anna; Magarey, Anthea Margaret

    2017-11-29

    Translation encompasses the continuum from clinical efficacy to widespread adoption within the healthcare service and ultimately routine clinical practice. The Parenting, Eating and Activity for Child Health (PEACH™) program has previously demonstrated clinical effectiveness in the management of child obesity, and has been recently implemented as a large-scale community intervention in Queensland, Australia. This paper aims to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention (PEACH™ QLD). Tensions between the RCT paradigm and implementation research will be discussed, along with lived evaluation challenges, responses to overcome these, and key learnings for future evaluation conducted at scale. The translation of evaluation from the PEACH™ RCT to the large-scale community intervention PEACH™ QLD is described. While the CONSORT Statement was used to report findings from two previous RCTs, the RE-AIM framework was more suitable for the evaluation of upscaled delivery of the PEACH™ program. Evaluation of PEACH™ QLD was undertaken during the project delivery period from 2013 to 2016. Experiential learnings from conducting the evaluation of PEACH™ QLD according to the described evaluation framework are presented for the purposes of informing the future evaluation of upscaled programs. Evaluation changes in response to real-time changes in the delivery of the PEACH™ QLD Project were necessary at stages during the project term. Key evaluation challenges encountered included the collection of complete evaluation data from a diverse and geographically dispersed workforce and the systematic collection of process evaluation data in real time to support program changes during the project. Evaluation of large-scale community interventions in the real world is challenging and divergent from RCTs, which are rigorously evaluated within a more tightly-controlled clinical research setting. Constructs explored in an RCT are inadequate in describing the enablers and barriers of upscaled community program implementation. Methods for data collection, analysis and reporting also require consideration. We present a number of experiential reflections and suggestions, scarcely reported in the literature, for the successful evaluation of future upscaled community programs. PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).

  13. Contextual adaptation of the Personnel Evaluation Standards for assessing faculty evaluation systems in developing countries: the case of Iran

    PubMed Central

    Ahmady, Soleiman; Changiz, Tahereh; Brommels, Mats; Gaffney, F Andrew; Thor, Johan; Masiello, Italo

    2009-01-01

    Background Faculty evaluations can identify needs to be addressed in effective development programs. Generic evaluation models exist, but these require adaptation to a particular context of interest. We report on one approach to such adaptation in the context of medical education in Iran, which is integrated into the delivery and management of healthcare services nationwide. Methods Using a triangulation design, interviews with senior faculty leaders were conducted to identify relevant areas for faculty evaluation. We then adapted the published checklist of the Personnel Evaluation Standards to fit the Iranian medical universities' context by considering faculty members' diverse roles. Then the adapted instrument was administered to faculty at twelve medical schools in Iran. Results The interviews revealed poor linkages between existing forms of development and evaluation, imbalance between the faculty work components and evaluated areas, and inappropriate feedback and use of information in decision making. The principles of the Personnel Evaluation Standards addressed almost all of these concerns and were used to assess the existing faculty evaluation system and also adapted to evaluate the core faculty roles. The survey response rate was 74%. Responses showed that the four principles in all faculty members' roles were met occasionally to frequently. Evaluation of teaching and research had the highest mean scores, while clinical and healthcare services, institutional administration, and self-development had the lowest mean scores. There were statistically significant differences between small, medium, and large medical schools (p < 0.000). Conclusion The adapted Personnel Evaluation Standards appears to be valid and applicable for monitoring and continuous improvement of a faculty evaluation system in the context of medical universities in Iran. The approach developed here provides a more balanced assessment of multiple faculty roles, including educational, clinical and healthcare services. In order to address identified deficiencies, the evaluation system should recognize, document, and uniformly reward those activities that are vital to the academic mission. Inclusion of personal developmental concerns in the evaluation discussion is essential for evaluation systems. PMID:19400932

  14. OERL: A Tool For Geoscience Education Evaluators

    NASA Astrophysics Data System (ADS)

    Zalles, D. R.

    2002-12-01

    The Online Evaluation Resource Library (OERL) is a Web-based set of resources for improving the evaluation of projects funded by the Directorate for Education and Human Resources (EHR) of the National Science Foundation (NSF). OERL provides prospective project developers and evaluators with material that they can use to design, conduct, document, and review evaluations. OERL helps evaluators tackle the challenges of seeing if a project is meeting its implementation and outcome-related goals. Within OERL is a collection of exemplary plans, instruments, and reports from evaluations of EHR-funded projects in the geosciences and in other areas of science and mathematics. In addition, OERL contains criteria about good evaluation practices, professional development modules about evaluation design and questionnaire development, a dictionary of key evaluation terms, and links to evaluation standards. Scenarios illustrate how the resources can be used or adapted. Currently housed in OERL are 137 instruments and full or excerpted versions of 38 plans and 60 reports; 143 science and math projects have contributed to the collection so far. OERL's search tool permits the launching of precise searches based on key attributes of resources such as their subject area and the name of the sponsoring university or research institute. OERL's goals are to 1) meet the needs for continuous professional development of evaluators and principal investigators, 2) complement traditional vehicles of learning about evaluation, 3) utilize the affordances of current technologies (e.g., Web-based digital libraries, relational databases, and electronic performance support systems) for improving evaluation practice, 4) provide anytime/anyplace access to updatable resources that support evaluators' needs, and 5) provide a forum by which professionals can interact on evaluation issues and practices. Geoscientists can search the collection of resources from geoscience education projects that have been funded by NSF to carry out curriculum development, teacher education, faculty development, and increased access, retention, and preparation of under-represented student populations in science. Over the next two years, additional plans, instruments, and reports from other projects will be added to the OERL collection. Also to be added are more professional development modules and online coaches for constructing key evaluation documents. The presentation gives an overview of the structure of OERL, describes some of the geoscience projects in the collection, and provides some examples of how its resources can be used and adapted for other geoscience education evaluations.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, R.L.

    The Panel on Basic Nuclear Data Compilations believes that it is important to provide the user with an evaluated nuclear database of the highest quality, dependability, and currency. It is also important that the evaluated nuclear data are easily accessible to the user. In the past the panel concentrated its concern on the cycle time for the publication of A-chain evaluations. However, the panel now recognizes that publication cycle time is no longer the appropriate goal. Sometime in the future, publication of the evaluated A-chains will evolve from the present hard-copy Nuclear Data Sheets on library shelves to purely electronic publication, with the advent of universal access to terminals and the nuclear databases. Therefore, the literature cut-off date in the Evaluated Nuclear Structure Data File (ENSDF) is rapidly becoming the only important measure of the currency of an evaluated A-chain. Also, it has become exceedingly important to ensure that access to the databases is as user-friendly as possible and to enable electronic publication of the evaluated data files. Considerable progress has been made in these areas: use of the on-line systems has almost doubled in the past year, and there has been initial development of tools for electronic evaluation, publication, and dissemination. Currently, the nuclear data effort is in transition between the traditional and future methods of dissemination of the evaluated data. Also, many of the factors that adversely affect the publication cycle time simultaneously affect the currency of the evaluated nuclear database. Therefore, the panel continues to examine factors that can influence cycle time: the number of evaluators, the frequency with which an evaluation can be updated, the review of the evaluation, and the production of the evaluation, which currently exists as a hard-copy issue of Nuclear Data Sheets.

  16. The evaluator as technical assistant: A model for systemic reform support

    NASA Astrophysics Data System (ADS)

    Century, Jeanne Rose

    This study explored evaluation of systemic reform. Specifically, it focused on the evaluation of a systemic effort to improve K-8 science, mathematics and technology education. The evaluation was of particular interest because it used both technical assistance and evaluation strategies. Through studying the combination of these roles, this investigation set out to increase understanding of potentially new evaluator roles, distinguish important characteristics of the evaluator/project participant relationship, and identify how these roles and characteristics contribute to effective evaluation of systemic science education reform. This qualitative study used interview, document analysis, and participant observation as methods of data collection. Interviews were conducted with project leaders, project participants, and evaluators and focused on the evaluation strategies and process, the use of the evaluation, and technical assistance. Documents analyzed included transcripts of evaluation team meetings and reports, memoranda and other print materials generated by the project leaders and the evaluators. Data analysis consisted of analytic and interpretive procedures consistent with the qualitative data collected and entailed a combined process of coding transcripts of interviews and meetings, field notes, and other documents; analyzing and organizing findings; writing of reflective and analytic memos; and designing and diagramming conceptual relationships. The data analysis resulted in the development of the Multi-Function Model for Systemic Reform Support. This model organizes systemic reform support into three functions: evaluation, technical assistance, and a third, named here as "systemic perspective." These functions work together to support the project's educational goals as well as a larger goal--building capacity in project participants. This model can now serve as an informed starting point or "blueprint" for strategically supporting systemic reform.

  17. Case study of evaluations that go beyond clinical outcomes to assess quality improvement diabetes programmes using the Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE).

    PubMed

    Paquette-Warren, Jann; Harris, Stewart B; Naqshbandi Hayward, Mariam; Tompkins, Jordan W

    2016-10-01

    Investments in efforts to reduce the burden of diabetes on patients and health care are critical; however, more evaluation is needed to provide evidence that informs and supports future policies and programmes. The newly developed Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE) incorporates the theoretical concepts needed to facilitate the capture of critical information to guide investments, policy and programmatic decision making. The aim of the study is to assess the applicability and value of DEFINE in comprehensive real-world evaluation. Using a critical and positivist approach, this intrinsic and collective case study retrospectively examines two naturalistic evaluations to demonstrate how DEFINE could be used when conducting real-world comprehensive evaluations in health care settings. The variability between the cases and the evaluation designs are described and aligned to the DEFINE goals, steps and sub-steps. The majority of the theoretical steps of DEFINE were exemplified in both cases, although limited for knowledge translation efforts. Application of DEFINE to evaluate diverse programmes that target various chronic diseases is needed to further test the inclusivity and built-in flexibility of DEFINE and its role in encouraging more comprehensive knowledge translation. This case study shows how DEFINE could be used to structure or guide comprehensive evaluations of programmes and initiatives implemented in health care settings and support scale-up of successful innovations. Future use of the framework will continue to strengthen its value in guiding programme evaluation and informing health policy to reduce the burden of diabetes and other chronic diseases. © 2016 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.

  18. The role and utilisation of public health evaluations in Europe: a case study of national hand hygiene campaigns

    PubMed Central

    2014-01-01

    Background Evaluations are essential to judge the success of public health programmes. In Europe, the proportion of public health programmes that undergo evaluation remains unclear. The European Centre for Disease Prevention and Control sought to determine the frequency of evaluations amongst European national public health programmes by using national hand hygiene campaigns as an example of intervention. Methods A cohort of all national hand hygiene campaigns initiated between 2000 and 2012 was utilised for the analysis. The aim was to collect information about evaluations of hand hygiene campaigns and their frequency. The survey was sent to nominated contact points for healthcare-associated infection surveillance in European Union and European Economic Area Member States. Results Thirty-six hand hygiene campaigns in 20 countries were performed between 2000 and 2012. Of these, 50% had undergone an evaluation and 55% of those utilised the WHO hand hygiene intervention self-assessment tool. Evaluations utilised a variety of methodologies and indicators in assessing changes in hand hygiene behaviours pre and post intervention. Of the 50% of campaigns that were not evaluated, two thirds reported that both human and financial resource constraints posed significant barriers for the evaluation. Conclusion The study identified an upward trend in the number of hand hygiene campaigns implemented in Europe. It is likely that the availability of the internationally-accepted evaluation methodology developed by the WHO contributed to the evaluation of more hand hygiene campaigns in Europe. Despite this rise, hand hygiene campaigns appear to be under-evaluated. The development of simple, programme-specific, standardised guidelines, evaluation indicators and other evidence-based public health materials could help promote evaluations across all areas of public health. PMID:24507086

  19. Study on process evaluation model of students' learning in practical course

    NASA Astrophysics Data System (ADS)

    Huang, Jie; Liang, Pei; Shen, Wei-min; Ye, Youxiang

    2017-08-01

    In practical course teaching based on the project object method, traditional evaluation methods such as class attendance, assignments and exams fail to give undergraduate students incentives to learn innovatively and autonomously. In this paper, elements such as creative innovation, teamwork, and documentation and reporting were put into the process evaluation method, and a process evaluation model was set up. Educational practice shows that the evaluation model makes process evaluation of students' learning more comprehensive, accurate, and fair.
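
    The abstract names the elements folded into the process evaluation but not how they are combined. A common choice, shown here purely as an assumed illustration (the weights and element names are invented, not taken from the paper), is a weighted aggregate of per-element marks.

      # Hypothetical weighted process-evaluation score; weights are invented.
      weights = {
          "class_attendance": 0.10,
          "assignments": 0.20,
          "exams": 0.20,
          "creative_innovation": 0.20,
          "teamwork": 0.15,
          "documents_and_reporting": 0.15,
      }

      def process_score(marks):
          """Weighted aggregate of per-element marks on a 0-100 scale."""
          return sum(weights[k] * marks[k] for k in weights)

      example_marks = {k: 80.0 for k in weights}  # example student
      print(f"process evaluation score: {process_score(example_marks):.1f}")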

  20. Algunos Criterios para Evaluar Programas de Educacion Superior a Nivel de Posgrado: El Caso Particular de la Administracion Publica (Some Criteria to Evaluate Higher Education Programs at the Graduate Level: The Special Case of Public Administration).

    ERIC Educational Resources Information Center

    Valle, Victor M.

    Intended as a contribution to a workshop discussion on program evaluation in higher education, the paper covers five major evaluation issues. First, it deals with evaluation concepts, explaining the purposes of evaluation; pertinent terms; and the sources of evaluation in public health procedures, the scientific method, the systems approach, and…

  1. Nuclear criticality safety evaluation of SRS 9971 shipping package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vescovi, P.J.

    1993-02-01

    This evaluation was requested to revise the criticality evaluation used to generate Chapter 6 (Criticality Evaluation) of the Safety Analysis Report for Packaging (SARP) for shipment of UO₃ product from the Uranium Solidification Facility (USF) in the SRS 9971 shipping package. The pertinent document requesting this evaluation is included as Attachment I. The results of the evaluation are given in Attachment II, which is written as Chapter 6 of an NRC-format SARP.

  2. Evaluative priming in a semantic flanker task: ERP evidence for a mutual facilitation explanation.

    PubMed

    Schmitz, Melanie; Wentura, Dirk; Brinkmann, Thorsten A

    2014-03-01

    In semantic flanker tasks, target categorization response times are affected by the semantic compatibility of the flanker and target. With positive and negative category exemplars, we investigated the influence of evaluative congruency (whether flanker and target share evaluative valence) on the flanker effect, using behavioral and electrophysiological measures. We hypothesized a moderation of the flanker effect by evaluative congruency on the basis of the assumption that evaluatively congruent concepts mutually facilitate each other's activation (see Schmitz & Wentura in Journal of Experimental Psychology: Learning, Memory, and Cognition 38:984-1000, 2012). Applying an onset delay of 50 ms for the flanker, we aimed to decrease the facilitative effect of an evaluatively congruent flanker on target encoding and, at the same time, increase the facilitative effect of an evaluatively congruent target on flanker encoding. As a consequence of increased flanker activation in the case of evaluative congruency, we expected a semantically incompatible flanker to interfere with the target categorization to a larger extent (as compared with an evaluatively incongruent pairing). Confirming our hypotheses, the flanker effect significantly depended on evaluative congruency, in both mean response times and N2 mean amplitudes. Thus, the present study provided behavioral and electrophysiological evidence for the mutual facilitation of evaluatively congruent concepts. Implications for the representation of evaluative connotations of semantic concepts are discussed.

  3. Evaluative stimulus (in)congruency impacts performance in an unrelated task: evidence for a resource-based account of evaluative priming.

    PubMed

    Gast, Anne; Werner, Benedikt; Heitmann, Christina; Spruyt, Adriaan; Rothermund, Klaus

    2014-01-01

    In two experiments, we assessed evaluative priming effects in a task that was unrelated to the congruent or incongruent stimulus pairs. In each trial, participants saw two valent (positive or negative) pictures that formed evaluatively congruent or incongruent stimulus pairs and a letter that was superimposed on the second picture. Different from typical evaluative priming studies, participants were not required to respond to the second of the valent stimuli, but asked to categorize the letter that was superimposed on the second picture. We assessed the impact of the evaluative (in)congruency of the two pictures on the performance in responding to the letter. In addition, we manipulated attention to the evaluative dimension by asking participants in one experimental group to respond to the valence of the pictures on a subset of trials (evaluative task condition). In both experiments, we found evaluative priming effects in letter categorization responses: Participants categorized the letter faster (and sometimes more correctly) in trials with congruent picture-pairs. These effects were present only in the evaluative task condition. These findings can be explained with different resource-based accounts of evaluative priming and the additional assumption that attention to valence is necessary for evaluative congruency to affect processing resources. According to resource-based accounts valence-incongruent trials require more cognitive resources than valence-congruent trials (e.g., Hermans, Van den Broeck, & Eelen, 1998).
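
    The dependent measure described here is a congruency effect on responses to the (unrelated) letter task; computed in the usual way, it is simply the mean response-time difference between incongruent and congruent picture-pairs. A minimal sketch with invented trial data (not the study's) follows.

      import numpy as np

      # Invented letter-categorization response times (ms).
      rt_congruent = np.array([512, 498, 530, 505, 520])    # congruent picture-pairs
      rt_incongruent = np.array([548, 560, 541, 555, 533])  # incongruent pairs

      # Evaluative priming effect: slower letter responses on incongruent trials.
      effect = rt_incongruent.mean() - rt_congruent.mean()
      print(f"priming effect on the letter task: {effect:.1f} ms")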

  4. Do economic evaluation studies inform effective healthcare resource allocation in Iran? A critical review of the literature.

    PubMed

    Haghparast-Bidgoli, Hassan; Kiadaliri, Aliasghar Ahmad; Skordis-Worrall, Jolene

    2014-01-01

    To aid informed health sector decision-making, data from sufficient high quality economic evaluations must be available to policy makers. To date, no known study has analysed the quantity and quality of available Iranian economic evaluation studies. This study aimed to assess the quantity, quality and targeting of economic evaluation studies conducted in the Iranian context. The study systematically reviewed full economic evaluation studies (n = 30) published between 1999 and 2012 in international and local journals. The findings of the review indicate that although the literature on economic evaluation in Iran is growing, these evaluations were of poor quality and suffered from several major methodological flaws. Furthermore, the review reveals that economic evaluation studies have not addressed the major health problems in Iran. While the availability of evidence is no guarantee that it will be used to aid decision-making, the absence of evidence will certainly preclude its use. Considering the deficiencies in the data identified by this review, current economic evaluations cannot be a useful source of information for decision makers in Iran. To improve the quality and overall usefulness of economic evaluations, we would recommend: 1) developing clear national guidelines for the conduct of economic evaluations, 2) highlighting priority areas where information from such studies would be most useful and 3) training researchers and policy makers in the calculation and use of economic evaluation data.

  5. Pre-training evaluation and feedback improve medical students' skills in basic life support.

    PubMed

    Li, Qi; Ma, Er-Li; Liu, Jin; Fang, Li-Qun; Xia, Tian

    2011-01-01

    Evaluation and feedback are two factors that can influence simulation-based medical education, and the time at which they are delivered contributes to their different effects. This study investigated the impact of pre-training evaluation and feedback on medical students' performance in basic life support (BLS). Forty 3rd-year undergraduate medical students were randomly divided into two groups of 20: the control group (C group) and the pre-training evaluation and feedback group (E&F group). After a BLS theoretical lecture, the C group received 45 min of BLS training, while the E&F group was individually evaluated (video-taped) in a mock cardiac arrest (pre-training evaluation). Fifteen minutes of group feedback related to the students' BLS performance in the pre-training evaluation was given to the E&F group, followed by 30 min of BLS training. After BLS training, both groups were evaluated on one-rescuer BLS skills in a 3-min mock cardiac arrest scenario (post-training evaluation). The score from the post-training evaluation was converted to a percentage and compared between the two groups. The score from the post-training evaluation was higher in the E&F group (82.9 ± 3.2% vs. 63.9 ± 13.4% in the C group). In undergraduate medical students without previous BLS training, pre-training evaluation and feedback improve their performance in subsequent BLS training.
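
    The reported group means (82.9 ± 3.2% vs. 63.9 ± 13.4%, n = 20 per group) can be compared directly from summary statistics with a Welch t-test. The sketch below assumes the ± values are standard deviations, which the abstract does not state.

      from scipy.stats import ttest_ind_from_stats  # assumes SciPy is available

      # Summary statistics from the abstract; +/- assumed to denote SDs.
      result = ttest_ind_from_stats(
          mean1=82.9, std1=3.2, nobs1=20,   # E&F group
          mean2=63.9, std2=13.4, nobs2=20,  # control (C) group
          equal_var=False,                  # Welch's t-test for unequal variances
      )
      print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")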

  6. Activations of the dorsolateral prefrontal cortex and thalamus during agentic self-evaluation are negatively associated with trait self-esteem.

    PubMed

    Jiang, Ke; Wu, Shi; Shi, Zhenhao; Liu, Mingyan; Peng, Maoying; Shen, Yang; Yang, Juan

    2018-08-01

    Individual self-esteem is dominated more by agency than by communion. However, prior research has mainly focused on one's agentic/communal self-evaluation, while little is known about how one endorses others' agentic/communal evaluation of the self. The present study investigated the associations between trait self-esteem and fundamental dimensions of social cognition, i.e. agency vs. communion, during both self-evaluation and endorsement of others' evaluation of oneself. We also investigated the neural mechanisms underlying the relationship between trait self-esteem and agentic self-evaluation. Behavioral results revealed that self-esteem was positively correlated with the agentic ratings from self-evaluation and endorsement of others' evaluation of the self, and that the agentic self-evaluation was a significant full mediator between self-esteem and endorsement of others' agentic evaluation. Whole-brain regression analysis revealed that self-esteem was negatively correlated with right dorsolateral prefrontal and bilateral thalamic response to agentic self-evaluation. A possible interpretation is that low self-esteem people both hold a more self-critical attitude about the self and have less certainty or clarity of their self-concepts than high self-esteem people do. These findings have important implications for understanding the neural and cognitive mechanisms underlying self-esteem's effect on one's agentic self-evaluations. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Are mastery and ability goals both adaptive? Evaluation, initial goal construction and the quality of task engagement.

    PubMed

    Butler, Ruth

    2006-09-01

    The aims of this research were to examine the predictions that (a) the kind of evaluation pupils anticipate will influence their initial achievement goals and, as a result, the quality and consequences of task engagement; and (b) initial mastery goals will promote new learning and intrinsic motivation and initial ability goals will promote entity beliefs that ability is fixed. Participants were 312 secondary school pupils at ages 13-15. Pupils expected to receive normative evaluation, temporal evaluation (scores over time) or no evaluation. Mastery and ability goals were measured before pupils worked on challenging problems; intrinsic motivation and entity beliefs were measured after task completion. Anticipation of temporal evaluation enhanced initial mastery goals, anticipation of normative evaluation enhanced ability goals and the no-evaluation condition undermined both. Anticipation of temporal evaluation enhanced new learning (strategy acquisition and performance gains) and intrinsic motivation both directly and by enhancing initial mastery goals; anticipation of normative evaluation enhanced entity beliefs by enhancing ability goals. Results confirmed that evaluation conveys potent cues as to the goals of activity. They also challenged claims that both mastery and ability goals can be adaptive by demonstrating that these were differentially associated with positive versus negative processes and outcomes. Results have theoretical and applied implications for understanding and improving evaluative practices and student motivation.

  8. Do economic evaluation studies inform effective healthcare resource allocation in Iran? A critical review of the literature

    PubMed Central

    2014-01-01

    To aid informed health sector decision-making, data from sufficient high quality economic evaluations must be available to policy makers. To date, no known study has analysed the quantity and quality of available Iranian economic evaluation studies. This study aimed to assess the quantity, quality and targeting of economic evaluation studies conducted in the Iranian context. The study systematically reviewed full economic evaluation studies (n = 30) published between 1999 and 2012 in international and local journals. The findings of the review indicate that although the literature on economic evaluation in Iran is growing, these evaluations were of poor quality and suffered from several major methodological flaws. Furthermore, the review reveals that economic evaluation studies have not addressed the major health problems in Iran. While the availability of evidence is no guarantee that it will be used to aid decision-making, the absence of evidence will certainly preclude its use. Considering the deficiencies in the data identified by this review, current economic evaluations cannot be a useful source of information for decision makers in Iran. To improve the quality and overall usefulness of economic evaluations we recommend: (1) developing clear national guidelines for the conduct of economic evaluations; (2) highlighting priority areas where information from such studies would be most useful; and (3) training researchers and policy makers in the calculation and use of economic evaluation data. PMID:25050084

  9. Evaluating a Professional Development Programme for Implementation of a Multidisciplinary Science Subject

    ERIC Educational Resources Information Center

    Visser, Talitha C.; Coenders, Fer G. M.; Terlouw, Cees; Pieters, Jules

    2013-01-01

    This study aims to evaluate a professional development programme that prepares and assists teachers with the implementation of a multidisciplinary science module, basing the evaluation on "participants' reactions," the first level of Guskey's five-level model for evaluation (2002). Positive evaluations at the higher levels in Guskey's…

  10. Collecting and Using Staff Performance Information for School Improvement.

    ERIC Educational Resources Information Center

    Tucker, Null A.

    Use of multiple data sources for evaluation of faculty performance by the DeKalb County (Georgia) School System is described. Focus is on two evaluations, the administrator evaluation and the counselor evaluation; the former has been employed longer than any other component of the evaluation system, while the latter is scheduled for its second…

  11. Report of the Inter-Organizational Committee on Evaluation. Internal Evaluation Model.

    ERIC Educational Resources Information Center

    White, Roy; Murray, John

    Based upon the premise that school divisions in Manitoba, Canada, should evaluate and improve upon themselves, this evaluation model was developed. The participating personnel and the development of the evaluation model are described. The model has 11 parts: (1) needs assessment; (2) statement of objectives; (3) definition of objectives; (4)…

  12. Student Evaluation of Teaching Effectiveness: An Assessment of Student Perception and Motivation.

    ERIC Educational Resources Information Center

    Chen, Yining; Hoshower, Leon B

    2003-01-01

    Evaluated key factors motivating students to participate in teaching evaluation. Found that students generally consider an improvement in teaching to be the most attractive outcome. The second most attractive outcome was using teaching evaluations to improve course content and format. Using teaching evaluations for a professor's tenure, promotion,…

  13. Planning Evaluation through the Program Life Cycle

    ERIC Educational Resources Information Center

    Scheirer, Mary Ann; Mark, Melvin M.; Brooks, Ariana; Grob, George F.; Chapel, Thomas J.; Geisz, Mary; McKaughan, Molly; Leviton, Laura

    2012-01-01

    Linking evaluation methods to the several phases of a program's life cycle can provide evaluation planners and funders with guidance about what types of evaluation are most appropriate over the trajectory of social and educational programs and other interventions. If methods are matched to the needs of program phases, evaluation can and should…

  14. Building an Evaluative Culture: The Key to Effective Evaluation and Results Management

    ERIC Educational Resources Information Center

    Mayne, John

    2009-01-01

    As many reviews of results-based performance systems have noted, a weak evaluative culture in an organization undermines attempts at building an effective evaluation and/or results management regime. This article sets out what constitutes a strong evaluative culture where information on performance results is deliberately sought in order to learn…

  15. 40 CFR 35.927-2 - Sewer system evaluation survey.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Sewer system evaluation survey. 35.927... § 35.927-2 Sewer system evaluation survey. (a) The sewer system evaluation survey shall identify the... results of the sewer system evaluation survey. In addition, the report shall include: (1) A justification...

  16. 40 CFR 35.927-2 - Sewer system evaluation survey.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Sewer system evaluation survey. 35.927... § 35.927-2 Sewer system evaluation survey. (a) The sewer system evaluation survey shall identify the... results of the sewer system evaluation survey. In addition, the report shall include: (1) A justification...

  17. 40 CFR 35.927-2 - Sewer system evaluation survey.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Sewer system evaluation survey. 35.927... § 35.927-2 Sewer system evaluation survey. (a) The sewer system evaluation survey shall identify the... results of the sewer system evaluation survey. In addition, the report shall include: (1) A justification...

  18. 40 CFR 35.927-2 - Sewer system evaluation survey.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Sewer system evaluation survey. 35.927... § 35.927-2 Sewer system evaluation survey. (a) The sewer system evaluation survey shall identify the... results of the sewer system evaluation survey. In addition, the report shall include: (1) A justification...

  19. Five Steps for Improving Evaluation Reports by Using Different Data Analysis Methods.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    Although methodological integrity is not the sole determinant of the value of a program evaluation, decision-makers do have a right, at a minimum, to be able to expect competent work from evaluators. This paper explores five areas where evaluators might improve methodological practices. First, evaluation reports should reflect the limited…

  20. Teaching and Learning: Highlighting the Parallels between Education and Participatory Evaluation.

    ERIC Educational Resources Information Center

    Vanden Berk, Eric J.; Cassata, Jennifer Coyne; Moye, Melinda J.; Yarbrough, Donald B.; Siddens, Stephanie K.

    As an evaluation team trained in educational psychology and committed to participatory evaluation and its evolution, the researchers have found the parallel between evaluator-stakeholder roles in the participatory evaluation process and educator-student roles in educational psychology theory to be important. One advantage then is that the theories…

  1. Commentary: Can This Evaluation Be Saved?

    ERIC Educational Resources Information Center

    Ginsberg, Pauline E.

    2004-01-01

    Can this evaluation be saved? More precisely, can this evaluation be saved in such a way that both evaluator and client feel satisfied that their points of view were respected and both agree that the evaluation itself provides valid information obtained in a principled manner? Because the scenario describes a preliminary discussion and no contract…

  2. Examining Values, Use, and Role in Evaluation: Prospects for a Broadened View

    ERIC Educational Resources Information Center

    Boulay, David A.; Han, Heeyoung

    2008-01-01

    This article reviews evaluation studies published in the HRD (human resource development) field. The authors further discuss general evaluation theories in terms of value, use, and evaluator role. The comparison of this literature suggests that evaluation in HRD has been limited by narrow perspectives. The authors attribute this narrow notion of…

  3. Principles, Promises, and a Personal Plea: What Is an Evaluator to Do?

    ERIC Educational Resources Information Center

    McDonald, Katherine E.; Myrick, Shannon E.

    2008-01-01

    The client of a student evaluation team has requested that the evaluators provide confidential identifying information gathered in the course of the evaluation. Here, the authors consider their response to the client's request. Specifically, they draw from professional principles developed to guide ethical decision making for evaluators and…

  4. 20 CFR 416.985 - How we evaluate other visual impairments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false How we evaluate other visual impairments. 416... evaluate other visual impairments. If you are not blind as defined in the law, we will evaluate a visual impairment the same as we evaluate other impairments in determining disability. Although you will not qualify...

  5. Encyclopedia of Educational Evaluation: Concepts and Techniques for Evaluating Education and Training Programs.

    ERIC Educational Resources Information Center

    Anderson, Scarvia B.; And Others

    Arranged like an encyclopedia, this book, addressed to directors and sponsors of education/training programs, as well as evaluators and those studying to become evaluators, unifies and systematizes the field of evaluation by organizing its main concepts and techniques into one volume. Researched and documented articles, contributed by recognized…

  6. The Evaluator's Role in Recommending Program Closure: A Model for Decision Making and Professional Responsibility

    ERIC Educational Resources Information Center

    Eddy, Rebecca M.; Berry, Tiffany

    2009-01-01

    Evaluators face challenges when programs consistently fail to meet expectations for performance or improvement and consequently, evaluators may recommend that closing a program is the most prudent course of action. However, the evaluation literature provides little guidance regarding when an evaluator might recommend program closure. Given…

  7. Course Evaluation in Sweden--When, How, What and Why

    ERIC Educational Resources Information Center

    Cronholm, Stefan

    2010-01-01

    This study is about course evaluation in Swedish higher education. Performing course evaluation is regulated by Swedish law. Despite this, only half of the courses are evaluated. The aim of this study is to understand why satisfactory course evaluations are not performed. Problems are identified from a student perspective and the paper provides…

  8. Content Analysis of Evaluation Instruments Used for Student Evaluation of Classroom Teaching Performance in Higher Education.

    ERIC Educational Resources Information Center

    Tagomori, Harry T.; Bishop, Laurence A.

    A major argument against evaluation of teacher performance by students pertains to the instruments being used. Colleges conduct instructional evaluation using instruments they devise, borrow, adopt, or adapt from other institutions. Whether these instruments are tested for content validity is unknown. This study determined how evaluation questions…

  9. The Development of Logical Structures for E-Learning Evaluation

    ERIC Educational Resources Information Center

    Tudevdagva, Uranchimeg; Hardt, Wolfram; Dolgor, Jargalmaa

    2013-01-01

    This paper deals with the development of logical structures for e-learning evaluation. Evaluation is a complex task in which many different groups of people are involved. As a rule, these groups have different understandings of and varying expectations for e-learning evaluation. Using logical structures for e-learning evaluation, we can join the different…

  10. Consequences of No Child Left Behind on Evaluation Purpose, Design, and Impact

    ERIC Educational Resources Information Center

    Mabry, Linda

    2008-01-01

    As an outgrowth of No Child Left Behind's narrow definition of scientifically based research, the priority given to certain quantitative evaluation designs has sparked debate among those in the evaluation community. Federal mandates for particular evaluation methodologies run counter to evaluation practice and to the direction of most evaluation…

  11. Evaluating Change in Medical School Curricula: How Did We Know Where We Were Going?

    ERIC Educational Resources Information Center

    Mahaffy, John; Gerrity, Martha S.

    1998-01-01

    Compares and contrasts the primary outcomes and methods used to evaluate curricular changes at eight medical schools participating in a large-scale medical curriculum development project. Describes how the evaluative data, both quantitative and qualitative, were collected, and how evaluation drove curricular change. Although the evaluations were…

  12. Evaluating the Reference Interview: A Theoretical Discussion of the Desirability and Achievability of Evaluation.

    ERIC Educational Resources Information Center

    Smith, Lisa L.

    1991-01-01

    Review and examination of the current literature on reference interview evaluation explores the degree to which such evaluative practices are both desirable and achievable. It is concluded that, if both quantitative and qualitative techniques are appropriately used, accurate mechanisms of evaluation are possible and desirable. (17 references) (LRW)

  13. Textbook Evaluation for the Students of Speech Therapy

    ERIC Educational Resources Information Center

    Jamshidi, Tahereh; Soori, Afshin

    2013-01-01

    This study aimed to evaluate an ESP textbook using McDonough and Shaw's (2003) framework of external and internal evaluation. The ESP textbook was "Special English for Computer Sciences" (2010) by Hojjat Baghban. The study also presents the external evaluation and a detailed evaluation of one chapter of the ESP textbook. This ESP…

  14. Using Recommendations in Evaluation: A Decision-Making Framework for Evaluators

    ERIC Educational Resources Information Center

    Iriti, Jennifer E.; Bickel, William E.; Nelson, Catherine Awsumb

    2005-01-01

    Is it appropriate and useful for evaluators to use findings to make recommendations? If so, under what circumstances? How specific should they be? This article presents a decision-making framework for the appropriateness of recommendations in varying contexts. On the basis of reviews of evaluation theory, selected evaluation reports, and feedback…

  15. A Comparison of Formative and Summative Evaluation.

    ERIC Educational Resources Information Center

    Belenski, Mary Jo

    Formative and summative evaluations in education are compared, and appropriate uses of these methods in program evaluation are discussed. The main purpose of formative evaluation is to determine a level of mastery of a learning task, along with discovering any part of the task that was not mastered. In other words, formative evaluation focuses the…

  16. An Introduction to Context and Its Role in Evaluation Practice

    ERIC Educational Resources Information Center

    Fitzpatrick, Jody L.

    2012-01-01

    Evaluators have written about the need to consider context in conducting evaluations, but most such admonitions are broad. Context is not developed fully. This chapter reviews the evaluation literature on context and discusses the two areas in which context has been more carefully considered by evaluators: the culture of program participants when…

  17. The Future of Evaluation: Catching Rocks with Cauldrons.

    ERIC Educational Resources Information Center

    Love, Arnold J.

    2001-01-01

    Explores the effects evidence-based practice and the guideline movement will have on evaluation in the future and discusses the impact on evaluation of three aspects of the current information revolution: (1) e-government; (2) new approaches to data access; and (3) real-time evaluation. Also considers the sources of evaluation innovation and the…

  18. Evaluating the "Evaluative State": Implications for Research in Higher Education.

    ERIC Educational Resources Information Center

    Dill, David D.

    1998-01-01

    Examines the "evaluative state"--that is, public management-based evaluation systems--in the context of experiences in the United Kingdom and New Zealand, and suggests that further research is needed to examine problems in the evaluative state itself, in how market competition impacts upon it, and how academic oligarchies influence the…

  19. Strategies for Increasing Response Rates for Online End-of-Course Evaluations

    ERIC Educational Resources Information Center

    Chapman, Diane D.; Joines, Jeffrey A.

    2017-01-01

    Student Evaluations of Teaching (SETs) are used by nearly all public and private universities as one means to evaluate teaching effectiveness. A majority of these universities have transitioned from the traditional paper-based evaluations to online evaluations, resulting in a decline in overall response rates. This has led to scepticism about the…

  20. Navigating Theory and Practice through Evaluation Fieldwork: Experiences of Novice Evaluation Practitioners

    ERIC Educational Resources Information Center

    Chouinard, Jill Anne; Boyce, Ayesha S.; Hicks, Juanita; Jones, Jennie; Long, Justin; Pitts, Robyn; Stockdale, Myrah

    2017-01-01

    To explore the relationship between theory and practice in evaluation, we focus on the perspectives and experiences of student evaluators, as they move from the classroom to an engagement with the social, political, and cultural dynamics of evaluation in the field. Through reflective journals, postcourse interviews, and facilitated group…

  1. 75 FR 65700 - 60-Day Notice of Proposed Information Collection: R/PPR Evaluation and Measurement Unit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... and Resources Evaluation and Measurement Unit (R/PPR EMU). Form Number: Survey numbers generated as... Evaluation and Measurement Unit, Evaluation Survey Question Bank ACTION: Notice of request for public... with the Paperwork Reduction Act of 1995. Title of Information Collection: R/PPR Evaluation and...

  2. A New Approach to Evaluation of University Teaching Considering Heterogeneity of Students' Preferences

    ERIC Educational Resources Information Center

    Kuzmanovic, Marija; Savic, Gordana; Popovic, Milena; Martic, Milan

    2013-01-01

    Students' evaluations of teaching are increasingly used by universities to evaluate teaching performance. However, these evaluations are controversial, mainly because students value various aspects of excellent teaching differently. Therefore, in this paper we propose a new approach to students' evaluations of university…

  3. 25 CFR 1000.354 - What is a trust evaluation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... that the functions are performed in accordance with trust standards as defined by Federal law. Trust... 25 Indians 2 2010-04-01 2010-04-01 false What is a trust evaluation? 1000.354 Section 1000.354... Trust Evaluation Review Annual Trust Evaluations § 1000.354 What is a trust evaluation? A trust...

  4. Training Evaluation as an Integral Component of Training for Performance.

    ERIC Educational Resources Information Center

    Lapp, H. J., Jr.

    A training evaluation system should address four major areas: reaction, learning, behavior, and results. The training evaluation system at GPU Nuclear Corporation addresses each of these areas through practical approaches such as course and program evaluation. GPU's program evaluation instrument uses a Likert-type scale to assess task development,…

  5. Program Evaluation of the Associate of Arts Degree. Revised.

    ERIC Educational Resources Information Center

    2003

    This document is the program evaluation of the Associate of Arts degree at Holmes Community College (Mississippi) that was completed in 2001. The Southern Association of Colleges and Schools mandates the evaluation so that all colleges have the opportunity to evaluate themselves and use the results of the evaluation to improve instruction. The…

  6. Fudging the Numbers: Distributing Chocolate Influences Student Evaluations of an Undergraduate Course

    ERIC Educational Resources Information Center

    Youmans, Robert J.; Jee, Benjamin D.

    2007-01-01

    Student evaluations provide important information about teaching effectiveness. Research has shown that student evaluations can be mediated by unintended aspects of a course. In this study, we examined whether an event unrelated to a course would increase student evaluations. Six discussion sections completed course evaluations administered by an…

  7. Higher Education Trends (1997-1999): Program Evaluation. ERIC-HE Trends.

    ERIC Educational Resources Information Center

    Kezar, Adrianna J.

    The amount of literature on program evaluation decreased in 1996, continuing a trend begun in the late 1980s. One exception to this is the literature on assessment. Another frequent issue is the technique of evaluation. Many examples of research on evaluation are from international settings, where accountability and evaluation appear to be…

  8. Program Evaluation at HEW: Research versus Reality. Part 2: Education.

    ERIC Educational Resources Information Center

    Abert, James G., Ed.

    Intended for both the student and the practitioner of evaluation, this book describes the state of the practice of program evaluation. Its focus is mainly institutional. Results of evaluation studies are of secondary importance. An introductory chapter written by the editor discusses evaluation at the Office of Education from 1967 through 1973.…

  9. Evaluators' Decision Making: The Relationship between Theory, Practice, and Experience

    ERIC Educational Resources Information Center

    Tourmen, Claire

    2009-01-01

    How do evaluation practitioners make choices when they evaluate a program? What function do evaluation theories play in practice? In this article, I report on an exploratory study that examined evaluation practices in France. The research began with observations of practitioners' activities, with a particular focus on the phases of evaluation…

  10. Evaluate to Improve: Useful Approaches to Student Evaluation

    ERIC Educational Resources Information Center

    Golding, Clinton; Adam, Lee

    2016-01-01

    Many teachers in higher education use feedback from students to evaluate their teaching, but only some use these evaluations to improve their teaching. One important factor that makes the difference is the teacher's approach to their evaluations. In this article, we identify some useful approaches for improving teaching. We conducted focus groups…

  11. The Oral History of Evaluation: The Professional Development of Evert Vedung

    ERIC Educational Resources Information Center

    Tranquist, Joakim

    2015-01-01

    In the vast evaluation literature, there are numerous accounts describing the emergence of the field of evaluation. However, texts on evaluation history often describe how structural conditions for conducting evaluation have changed over time, often from an American perspective. Inspired by the Oral History Team, the purpose of this article is to…

  12. Taiwan Teacher Preparation Program Evaluation: Some Critical Perspectives

    ERIC Educational Resources Information Center

    Liu, Tze-Chang

    2015-01-01

    This paper focuses on the influences and changes of recent Taiwan teacher preparation program evaluation (TTPPE) as one of the national evaluation projects conducted by the Higher Education Evaluation and Accreditation Council of Taiwan. The main concerns are what kind of ideology is transformed through the policy by means of evaluation, and what…

  13. 25 CFR 1000.356 - May the trust evaluation process be used for additional reviews?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIAN SELF-DETERMINATION AND EDUCATION ACT Trust Evaluation Review Annual Trust Evaluations § 1000.356 May the trust evaluation process be used for additional reviews? Yes, if the parties agree. ... 25 Indians 2 2010-04-01 2010-04-01 false May the trust evaluation process be used for additional...

  14. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    ERIC Educational Resources Information Center

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…
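
    As an illustration of the variance-components idea, the sketch below computes a one-way intraclass correlation from a simulated courses-by-raters matrix. This is a deliberate simplification of the cross-classified multilevel models the authors fitted, and all data are hypothetical.

        # One-way ICC from a ratings matrix: a simplified stand-in for the
        # variance-components logic in the abstract (the authors used
        # cross-classified multilevel models). All data below are simulated.
        import numpy as np

        rng = np.random.default_rng(0)
        n_courses, n_raters = 30, 15
        course_effect = rng.normal(0, 0.5, size=(n_courses, 1))   # "true" course quality
        ratings = 3.5 + course_effect + rng.normal(0, 0.8, size=(n_courses, n_raters))

        course_means = ratings.mean(axis=1)
        ms_between = n_raters * course_means.var(ddof=1)       # between-course mean square
        ms_within = ratings.var(axis=1, ddof=1).mean()         # within-course mean square
        icc1 = (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)
        print(f"ICC(1) = {icc1:.3f}")  # share of rating variance attributable to courses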

  15. Making Evaluation Work for You: Ideas for Deriving Multiple Benefits from Evaluation

    ERIC Educational Resources Information Center

    Jayaratne, K. S. U.

    2016-01-01

    Increased demand for accountability has forced Extension educators to evaluate their programs and document program impacts. Due to this situation, some Extension educators may view evaluation simply as the task, imposed on them by administrators, of collecting outcome and impact data for accountability. They do not perceive evaluation as a useful…

  16. External Evaluation as Contract Work: The Production of Evaluator Identity

    ERIC Educational Resources Information Center

    Sturges, Keith M.

    2014-01-01

    Extracted from a larger study of the educational evaluation profession, this qualitative analysis explores how evaluator identity is shaped with constant reference to political economy, knowledge work, and personal history. Interviews with 24 social scientists who conduct or have conducted evaluations as a major part of their careers examined how…

  17. Are Online Student Evaluations of Faculty Influenced by the Timing of Evaluations?

    ERIC Educational Resources Information Center

    McNulty, John A.; Gruener, Gregory; Chandrasekhar, Arcot; Espiritu, Baltazar; Hoyt, Amy; Ensminger, David

    2010-01-01

    Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure…

  18. Program Evaluation: The Board Game--An Interactive Learning Tool for Evaluators

    ERIC Educational Resources Information Center

    Febey, Karen; Coyne, Molly

    2007-01-01

    The field of program evaluation lacks interactive teaching tools. To address this pedagogical issue, the authors developed a collaborative learning technique called Program Evaluation: The Board Game. The authors present the game and its development in this practitioner-oriented article. The evaluation board game is an adaptable teaching tool…

  19. GIS in Evaluation: Utilizing the Power of Geographic Information Systems to Represent Evaluation Data

    ERIC Educational Resources Information Center

    Azzam, Tarek; Robinson, David

    2013-01-01

    This article provides an introduction to geographic information systems (GIS) and how the technology can be used to enhance evaluation practice. As a tool, GIS enables evaluators to incorporate contextual features (such as accessibility of program sites or community health needs) into evaluation designs and highlights the interactions between…
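
    A minimal sketch of the kind of layering the article describes: hypothetical program sites shaded by a hypothetical contextual measure, using the geopandas library. The coordinates, the need_index column, and the output file name are all illustrative assumptions.

        # Mapping hypothetical program sites with geopandas, shaded by a
        # made-up community-need measure; a toy version of the contextual
        # layering described in the abstract.
        import geopandas as gpd
        import matplotlib.pyplot as plt
        import pandas as pd

        sites = pd.DataFrame({
            "site": ["A", "B", "C", "D"],
            "lon": [-118.24, -118.40, -118.15, -118.30],
            "lat": [34.05, 34.07, 33.99, 34.12],
            "need_index": [0.8, 0.3, 0.6, 0.9],   # hypothetical contextual measure
        })
        gdf = gpd.GeoDataFrame(
            sites, geometry=gpd.points_from_xy(sites.lon, sites.lat), crs="EPSG:4326"
        )
        ax = gdf.plot(column="need_index", cmap="Reds", legend=True, markersize=80)
        ax.set_title("Program sites shaded by community need (hypothetical)")
        plt.savefig("program_sites.png", dpi=150)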

  20. A Need for a Framework for Curriculum Evaluation in Oman

    ERIC Educational Resources Information Center

    Al-Jardani, Khalid Salim; Siraj, Saedah; Abedalaziz, Nabeel

    2012-01-01

    The field of curriculum evaluation is a key part of the educational process. This means that this area needs to be developed continuously and requires ongoing research. This paper highlights curriculum evaluation in Oman, different evaluation procedures and methods and instruments used. The need for a framework for curriculum evaluation is a vital…

  1. Working with External Evaluators

    ERIC Educational Resources Information Center

    Silver, Lauren; Burg, Scott

    2015-01-01

    Hiring an external evaluator is not right for every museum or every project. Evaluations are highly situational, grounded in specific times and places; each one is unique. The museum and the evaluator share equal responsibility in an evaluation's success, so it is worth investing time and effort to ensure that both are clear about the goals,…

  2. Creating Robust Evaluation of ATE Projects

    ERIC Educational Resources Information Center

    Eddy, Pamela L.

    2017-01-01

    Funded grant projects all involve some form of evaluation, and Advanced Technological Education (ATE) grants are no exception. Program evaluation is critical not only for determining whether a project has met its intended outcomes; the evaluation process is also a central feature of the grant application itself.…

  3. The Effects of Two Different Management Styles on Internal Evaluation or Boss and Evaluator: Conflict or Cooperation?

    ERIC Educational Resources Information Center

    Feigenbaum, Laurel; And Others

    This paper discusses observations from an examination of district management styles (autocratic vs. democratic) in evaluation offices and evaluation management levels (federal, state, and local) in relation to the creativity and effectiveness of internal evaluators and the usefulness of feedback information to the school site. (Author)

  4. 42 CFR 456.243 - Content of medical care evaluation studies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Content of medical care evaluation studies. 456.243... Ur Plan: Medical Care Evaluation Studies § 456.243 Content of medical care evaluation studies. Each medical care evaluation study must— (a) Identify and analyze medical or administrative factors related to...

  5. 42 CFR 456.143 - Content of medical care evaluation studies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Content of medical care evaluation studies. 456.143...: Medical Care Evaluation Studies § 456.143 Content of medical care evaluation studies. Each medical care evaluation study must— (a) Identify and analyze medical or administrative factors related to the hospital's...

  6. 76 FR 789 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... this agenda. The Committee will provide advice regarding future research efforts to inform HHS about... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  7. 76 FR 17130 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... Impact Study fits within this agenda. The Committee will provide advice regarding future research efforts... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  8. Reflections on Evaluation Costs: Direct and Indirect. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Alkin, Marvin; Ruskus, Joan A.

    This paper summarizes views on the costs of evaluation developed as part of CSE's Evaluation Productivity Project. In particular, it focuses on ideas about the kinds of costs associated with factors known to affect evaluation utilization. The first section deals with general issues involved in identifying and valuing cost components, particularly…

  9. National Fuel Cell Technology Evaluation Center | Hydrogen and Fuel Cells |

    Science.gov Websites

    The National Fuel Cell Technology Evaluation Center (NFCTEC) at NREL's Energy Systems Integration Facility processes and analyzes data for a variety of hydrogen and fuel cell technologies.

  10. Evaluating the Impact of Leadership Development: A Professional Guide

    ERIC Educational Resources Information Center

    Martineau, Jennifer; Hannum, Kelly

    2004-01-01

    Scratch the surface of any successful organization and readers will likely find systems designed to evaluate how well it runs. The approach to evaluation presented in this book can be applied in a variety of contexts, but the focus here is on the evaluation of leadership development initiatives. Effective evaluations keep leadership development…

  11. Practical Application of Aspiration as an Outcome Indicator in Extension Evaluation

    ERIC Educational Resources Information Center

    Jayaratne, K. S. U.

    2010-01-01

    Extension educators need simple and accurate evaluation tools for program evaluation. This article explains how to use aspiration as an outcome indicator in Extension evaluation and introduces a practical evaluation tool. Aspiration can be described as the readiness for change. By recording participants' levels of aspiration, we will be able to…

  12. 38 CFR 18.406 - Remedial action, voluntary action and self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., voluntary action and self-evaluation. 18.406 Section 18.406 Pensions, Bonuses, and Veterans' Relief... Basis of Handicap General Provisions § 18.406 Remedial action, voluntary action and self-evaluation. (a...-evaluation. (1) A recipient shall, within one year of the effective date of this part: (i) Evaluate with the...

  13. Thick Slice and Thin Slice Teaching Evaluations

    ERIC Educational Resources Information Center

    Tom, Gail; Tong, Stephanie Tom; Hesse, Charles

    2010-01-01

    Student-based teaching evaluations are an integral component to institutions of higher education. Previous work on student-based teaching evaluations suggest that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end of the term "thick slice" student evaluations. This study's…

  14. RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.

    ERIC Educational Resources Information Center

    Kershner, Keith M.

    Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…

  15. Metrics, The Measure of Your Future: Materials Evaluation Forms.

    ERIC Educational Resources Information Center

    Troy, Joan B.

    Three evaluation forms are contained in this publication by the Winston-Salem/Forsyth Metric Education Project to be used in conjunction with their materials. They are: (1) Field-Test Materials Evaluation Form; (2) Student Materials Evaluation Form; and (3) Composite Materials Evaluation Form. The questions in these forms are phrased so they can…

  16. The role of affect and cognition in health decision making.

    PubMed

    Keer, Mario; van den Putte, Bas; Neijens, Peter

    2010-03-01

    Both affective and cognitive evaluations of behaviours have been allocated various positions in theoretical models of decision making. Most often, they have been studied as direct determinants of either intention or overall evaluation, but these two possible positions have never been compared. The aim of this study was to determine whether affective and cognitive evaluations influence intention directly, or whether their influence is mediated by overall evaluation. A sample of 300 university students filled in questionnaires on their affective, cognitive, and overall evaluations in respect of 20 health behaviours. The data were interpreted using mediation analyses with the application of path modelling. Both affective and cognitive evaluations were found to have significantly predicted intention. The influence of affective evaluation was largely direct for each of the behaviours studied, whereas that of cognitive evaluation was partially direct and partially mediated by overall evaluation. These results indicate that decisions regarding the content of persuasive communication (affective vs. cognitive) are highly dependent on the theoretical model chosen. It is suggested that affective evaluation should be included as a direct determinant of intention in theories of decision making when predicting health behaviours.
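
    The path logic described here can be sketched with three ordinary regressions. The simulation below is not the authors' model: it treats overall evaluation as a mediator between affective evaluation and intention, and all coefficients and data are invented for illustration.

        # Simple mediation sketch: does overall evaluation (M) mediate the
        # effect of affective evaluation (X) on intention (Y)? Simulated
        # data; the study itself used path modelling across 20 behaviours.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 300
        x = rng.normal(size=n)                      # affective evaluation
        m = 0.5 * x + rng.normal(size=n)            # overall evaluation (mediator)
        y = 0.4 * x + 0.3 * m + rng.normal(size=n)  # intention

        total = sm.OLS(y, sm.add_constant(x)).fit()                         # path c
        a = sm.OLS(m, sm.add_constant(x)).fit()                             # path a
        direct = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # paths c', b

        c = total.params[1]
        c_prime, b = direct.params[1], direct.params[2]
        print(f"total = {c:.2f}, direct = {c_prime:.2f}, indirect = {a.params[1] * b:.2f}")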

  17. Agent-based modeling as a tool for program design and evaluation.

    PubMed

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in implementing systems approaches. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.
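
    As a toy illustration of the technique (not drawn from the paper), the sketch below simulates program adoption spreading through peer influence on a random directed network; every parameter is an assumption.

        # Minimal agent-based model: agents adopt a program behaviour through
        # peer influence on a random directed network. All parameters are
        # illustrative assumptions.
        import random

        random.seed(42)
        N, NEIGHBOURS, STEPS = 200, 5, 20
        ADOPT_PROB = 0.1          # chance of adopting per adopting neighbour

        network = {i: random.sample([j for j in range(N) if j != i], NEIGHBOURS)
                   for i in range(N)}
        adopted = {i: (random.random() < 0.05) for i in range(N)}   # 5% seed adopters

        for step in range(STEPS):
            snapshot = dict(adopted)               # update all agents synchronously
            for agent, neighbours in network.items():
                if snapshot[agent]:
                    continue
                exposures = sum(snapshot[n] for n in neighbours)
                if random.random() < 1 - (1 - ADOPT_PROB) ** exposures:
                    adopted[agent] = True
            print(f"step {step:2d}: {sum(adopted.values())} adopters")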

  18. Evaluation of the clinical protocol quality for family planning services of people living with HIV/AIDS.

    PubMed

    Brasil, Raquel Ferreira Gomes; Silva, Maria Josefina da; Moura, Escolástica Rejane Ferreira

    2018-01-01

    To evaluate the quality of a clinical protocol for family planning care for people living with HIV/AIDS. An evaluative study based on the six domains of the Appraisal of Guidelines for Research & Evaluation II and on Pearson's coefficient of variation. The protocol reached between 88.8% and 100.0% quality in the domains of the Appraisal of Guidelines for Research & Evaluation II and 93.3% in the overall evaluation. The Pearson's coefficient of variation obtained was between zero and 18.6. Considering that a minimum percentage of 70.0% was adopted for the quality attributed by the evaluators, quality was achieved for all domains of the Appraisal of Guidelines for Research & Evaluation II. As the coefficient of variation for all domains was less than 25%, we can infer that the scores attributed by the evaluators were linear or homogeneous, indicating high agreement between them. The protocol was evaluated as a quality instrument, recommended for use by health professionals who deal with family planning for people living with HIV/AIDS.
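
    The agreement check described here reduces to simple arithmetic: the coefficient of variation is the standard deviation of the evaluators' scores divided by their mean, expressed as a percentage. A sketch with hypothetical scores, keeping the study's 25% homogeneity threshold:

        # Coefficient of variation as an agreement check across evaluators.
        # The scores are hypothetical; the CV < 25% threshold is from the abstract.
        import numpy as np

        domain_scores = np.array([90.0, 95.0, 88.0, 100.0])   # one AGREE II domain
        cv = 100 * domain_scores.std(ddof=1) / domain_scores.mean()
        print(f"CV = {cv:.1f}%  ->  {'high agreement' if cv < 25 else 'low agreement'}")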

  19. Program Evaluation Resources

    EPA Pesticide Factsheets

    These resources list tools to help you conduct evaluations, find organizations outside of EPA that are useful to evaluators, and find additional guides on how to do evaluations from organizations outside of EPA.

  20. The role of quantitative safety evaluation in regulatory decision making of drugs.

    PubMed

    Chakravarty, Aloka G; Izem, Rima; Keeton, Stephine; Kim, Clara Y; Levenson, Mark S; Soukup, Mat

    2016-01-01

    Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA. The article then focuses on five active areas of quantitative drug safety evaluation and the role Division of Biometrics VII (DBVII) plays in these areas, namely meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements (PMRs), the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. This article will focus chiefly on developments related to quantitative drug safety evaluation and not on the many additional developments in drug safety in general.
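
    Of the areas listed, meta-analysis is the most readily illustrated. The sketch below shows a generic fixed-effect (inverse-variance) pooling of trial-level log odds ratios; it is not FDA code, and the trial-level estimates are made up.

        # Fixed-effect (inverse-variance) meta-analysis of log odds ratios:
        # a generic sketch of the safety meta-analysis idea named in the
        # abstract. The per-trial estimates below are hypothetical.
        import numpy as np

        log_or = np.array([0.25, 0.10, 0.40])   # per-trial log odds ratios
        se = np.array([0.15, 0.20, 0.25])       # their standard errors

        w = 1 / se**2                           # inverse-variance weights
        pooled = np.sum(w * log_or) / np.sum(w)
        pooled_se = np.sqrt(1 / np.sum(w))
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
        print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")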
