Sample records for preoperative angiographic evaluation

  1. Evaluating evaluation forms.

    PubMed

    Smith, Roger P

    2004-02-01

    To provide a tool for evaluating evaluation forms. A new form has been developed and tested on itself and a sample of evaluation forms obtained from the graduate medical education offices of several local universities. Additional forms from hospital administration were also subjected to analysis. The new form performed well when applied to itself. The form performed equally well when applied to the other (subject) forms, although their scores were embarrassingly poor. A new form for evaluating evaluation forms is needed, useful, and now available.

  2. Documenting Evaluation Use: Guided Evaluation Decisionmaking. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Burry, James

    This paper documents the evaluation use process among districts using the Guide for Evaluation Decision Makers, published by the Center for the Study of Evaluation (CSE) during the 1984-85 school year. Included are the following: (1) a discussion of research that led to conclusions concerning the administrator's role in evaluation use; (2) a…

  3. Evaluator and Program Manager Perceptions of Evaluation Capacity and Evaluation Practice

    ERIC Educational Resources Information Center

    Fierro, Leslie A.; Christie, Christina A.

    2017-01-01

    The evaluation community has demonstrated an increased emphasis and interest in evaluation capacity building in recent years. A need currently exists to better understand how to measure evaluation capacity and its potential outcomes. In this study, we distributed an online questionnaire to managers and evaluation points of contact working in…

  4. Evaluating how we evaluate.

    PubMed

    Vale, Ronald D

    2012-09-01

    Evaluation of scientific work underlies the process of career advancement in academic science, with publications being a fundamental metric. Many aspects of the evaluation process for grants and promotions are deeply ingrained in institutions and funding agencies and have been altered very little in the past several decades, despite substantial changes that have taken place in the scientific work force, the funding landscape, and the way that science is being conducted. This article examines how scientific productivity is being evaluated, what it is rewarding, where it falls short, and why richer information than a standard curriculum vitae/biosketch might provide a more accurate picture of scientific and educational contributions. The article also explores how the evaluation process exerts a profound influence on many aspects of the scientific enterprise, including the training of new scientists, the way in which grant resources are distributed, the manner in which new knowledge is published, and the culture of science itself.

  5. Evaluating how we evaluate

    PubMed Central

    Vale, Ronald D.

    2012-01-01

    Evaluation of scientific work underlies the process of career advancement in academic science, with publications being a fundamental metric. Many aspects of the evaluation process for grants and promotions are deeply ingrained in institutions and funding agencies and have been altered very little in the past several decades, despite substantial changes that have taken place in the scientific work force, the funding landscape, and the way that science is being conducted. This article examines how scientific productivity is being evaluated, what it is rewarding, where it falls short, and why richer information than a standard curriculum vitae/biosketch might provide a more accurate picture of scientific and educational contributions. The article also explores how the evaluation process exerts a profound influence on many aspects of the scientific enterprise, including the training of new scientists, the way in which grant resources are distributed, the manner in which new knowledge is published, and the culture of science itself. PMID:22936699

  6. Meta-Evaluation

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.

    2011-01-01

    Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…

  7. Evaluation as Empowerment and the Evaluator as Enabler.

    ERIC Educational Resources Information Center

    Whitmore, Elizabeth

    One rationale for implementing a particular evaluation approach is the empowerment of stakeholders. Evaluation as empowerment and possible links between empowerment and increased utilization of evaluation results are explored. Evaluation as empowerment assumes that individuals need to be personally productive and responsible in coping with their…

  8. Empowerment Evaluation: A Form of Self-Evaluation.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    Empowerment evaluation is an innovative approach that uses evaluation concepts and techniques to foster improvement and self-determination. Empowerment evaluation employs qualitative and quantitative methodologies. Although it can be applied to individuals and organizations, the usual focus is on programs. The value orientation of empowerment…

  9. American Evaluation Association: Guiding Principles for Evaluators

    ERIC Educational Resources Information Center

    American Journal of Evaluation, 2009

    2009-01-01

    The American Evaluation Association (AEA) strives to promote ethical practice in the evaluation of programs, products, personnel, and policy. This article presents the list of principles which AEA developed to guide evaluators in their professional practice. These principles are: (1) Systematic Inquiry; (2) Competence; (3) Integrity/Honesty; (4)…

  10. The Influence of Evaluators' Principles on Evaluation Resource Decisions

    ERIC Educational Resources Information Center

    Crohn, Kara Shea Davis

    2009-01-01

    This study examines ways in which evaluators' principles influence decisions about evaluation resources. Evaluators must seek-out and allocate (often scarce) resources (e.g., money, time, data, people, places) in a way that allows them to conduct the best possible evaluation given clients' and evaluation participants' constraints. Working within…

  11. The Evaluation Handbook: Guidelines for Evaluating Dropout Prevention Programs.

    ERIC Educational Resources Information Center

    Smink, Jay; Stank, Peg

    This manual, developed in an effort to take the mysticism out of program evaluation, discusses six phases of the program evaluation process. The introduction discusses reasons for evaluation, process and outcome evaluation, the purpose of the handbook, the evaluation process, and the Sequoia United School District Dropout Prevention Program. Phase…

  12. The Use of Multiple Evaluation Approaches in Program Evaluation

    ERIC Educational Resources Information Center

    Bledsoe, Katrina L.; Graham, James A.

    2005-01-01

    The authors discuss the use of multiple evaluation approaches in conducting program evaluations. Specifically, they illustrate four evaluation approaches (theory-driven, consumer-based, empowerment, and inclusive evaluation) and briefly discuss a fifth (use-focused evaluation) as a side effect of the use of the others. The authors also address the…

  13. Culturally Responsive Evaluation Meets Systems-Oriented Evaluation

    ERIC Educational Resources Information Center

    Thomas, Veronica G.; Parsons, Beverly A.

    2017-01-01

    The authors of this article each bring a different theoretical background to their evaluation practice. The first author has a background of attention to culturally responsive evaluation (CRE), while the second author has a background of attention to systems theories and their application to evaluation. Both have had their own evolution of…

  14. Evaluative judgments are based on evaluative information: Evidence against meaning change in evaluative context effects.

    PubMed

    Kaplan, M F

    1975-07-01

    Trait adjectives commonly employed in person perception studies have both evaluative and denotative meanings. Evaluative ratings of single traits shift with variations in the context of other traits ascribed to the stimulus person; the extent to which denotative changes underlie these evaluative context effects has been a theoretical controversy. In the first experiment, it was shown that context effects on quantitative ratings of denotation can be largely accounted for by evaluative halo effects. In the second experiment, increasing the denotative relatedness of context traits to the test trait did not increase the effect of the context. Only the evaluative meaning of the context affected evaluation of the rated test trait. These studies suggest that the denotative relationship between a test adjective and its context has little influence on context effects in person perception, and that denotative meaning changes do not mediate context effects. Instead, evaluative judgments appear to be based on evaluative meaning.

  15. Improving Beta Test Evaluation Response Rates: A Meta-Evaluation

    ERIC Educational Resources Information Center

    Russ-Eft, Darlene; Preskill, Hallie

    2005-01-01

    This study presents a meta-evaluation of a beta-test of a customer service training program. The initial evaluation showed a low response rate. Therefore, the meta-evaluation focused on issues related to the conduct of the initial evaluation and reasons for nonresponse. The meta-evaluation identified solutions to the nonresponse problem as related…

  16. Collaborative Evaluation within a Framework of Stakeholder-Oriented Evaluation Approaches

    ERIC Educational Resources Information Center

    O'Sullivan, Rita G.

    2012-01-01

    Collaborative Evaluation systematically invites and engages stakeholders in program evaluation planning and implementation. Unlike "distanced" evaluation approaches, which reject stakeholder participation as evaluation team members, Collaborative Evaluation assumes that active, on-going engagement between evaluators and program staff,…

  17. Reinventing Evaluation

    ERIC Educational Resources Information Center

    Hopson, Rodney K.

    2005-01-01

    This commentary reviews "Negotiating Researcher Roles in Ethnographic Program Evaluation" and discusses the changing field of evaluation. It situates postmodern deliberations in evaluation anthropology and ethnoevaluation, two concepts that explore the interdisciplinary merger in evaluation, ethnography, and anthropology. Reflecting on Hymes's…

  18. Evaluating Training.

    ERIC Educational Resources Information Center

    Brethower, Karen S.; Rummler, Geary A.

    1979-01-01

    Presents general systems models (ballistic system, guided system, and adaptive system) and an evaluation matrix to help in examining training evaluation alternatives and in deciding what evaluation is appropriate. Includes some guidelines for conducting evaluation studies using four designs (control group, reversal, multiple baseline, and…

  19. Legislative Evaluation.

    ERIC Educational Resources Information Center

    Fox, Harrison

    The speaker discusses Congressional program evaluation. From the Congressional perspective, good evaluators understand the political, social, and economic processes; are familiar with various evaluation methods; and know how to use authority and power within their roles. Program evaluation serves three major purposes: to anticipate social impact…

  20. Economic evaluation in collaborative hospital drug evaluation reports.

    PubMed

    Ortega, Ana; Fraga, María Dolores; Marín-Gil, Roberto; Lopez-Briz, Eduardo; Puigventós, Francesc; Dranitsaris, George

    2015-09-01

    Economic evaluation is a fundamental criterion when deciding a drug's place in therapy. The MADRE method (Method for Assistance in making Decisions and Writing Drug Evaluation Reports) is widely used for drug evaluation. This method was developed by the GENESIS group of the Spanish Society of Hospital Pharmacy (SEFH), including economic evaluation. We intend to improve the economic aspects of this method; as a first step, we analyze our previous experiences with the current methodology and propose necessary improvements. Economic evaluation sections in collaboratively conducted drug evaluation reports (as the scientific society, SEFH) using the MADRE method were reviewed retrospectively. Thirty-two reports were reviewed; 87.5% of them included an economic evaluation conducted by the authors, and 65.6% contained published economic evaluations. In 90.6% of the reports, a budget impact analysis was conducted. The cost per life year gained or per quality-adjusted life year gained was present in 14 reports. Twenty-three reports received public comments regarding the need to improve the economic aspect. Main difficulties: low-quality evidence in the target population, no comparative studies with a relevant comparator, non-final outcomes evaluated, no quality-of-life data, no fixed drug price available, dosing uncertainty, and different prices for the same drug. Proposed improvements: incorporating different forms of aid for non-drug costs, survival estimation, and adapting published economic evaluations; establishing criteria for drug price selection, decision-making under uncertainty and poor-quality evidence, dose calculation, and cost-effectiveness thresholds depending on different situations. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  1. Methods of Product Evaluation. Guide Number 10. Evaluation Guides Series.

    ERIC Educational Resources Information Center

    St. John, Mark

    In this guide the logic of product evaluation is described in a framework that is meant to be general and adaptable to all kinds of evaluations. Evaluators should consider using the logic and methods of product evaluation when (1) the purpose of the evaluation is to aid evaluators in making a decision about purchases; (2) a comprehensive…

  2. Teacher Evaluation.

    ERIC Educational Resources Information Center

    Saif, Philip

    This article examines why teachers should be evaluated, how teacher evaluation is perceived, and how teacher evaluation can be approached, focusing on the improvement of teacher competency rather than defining a teacher as "good" or "bad." Since the primary professional activity of a teacher is teaching, the major concern of teacher evaluation is…

  3. Secondary Evaluations.

    ERIC Educational Resources Information Center

    Cook, Thomas D.

    Secondary evaluations, in which an investigator takes a body of evaluation data collected by a primary evaluation researcher and examines the data to see if the original conclusions about the program correspond with his own, are discussed. The different kinds of secondary evaluations and the advantages and disadvantages of each are pointed out,…

  4. Changing CS Features Alters Evaluative Responses in Evaluative Conditioning

    ERIC Educational Resources Information Center

    Unkelbach, Christian; Stahl, Christoph; Forderer, Sabine

    2012-01-01

    Evaluative conditioning (EC) refers to changes in people's evaluative responses toward initially neutral stimuli (CSs) by mere spatial and temporal contiguity with other positive or negative stimuli (USs). We investigate whether changing CS features from conditioning to evaluation also changes people's evaluative response toward these CSs. We used…

  5. University Evaluations and Different Evaluation Approaches: A Finnish Perspective

    ERIC Educational Resources Information Center

    Liuhanen, Anna-Maija

    2005-01-01

    Evaluation of higher education can be described as a species of its own, with only a few connections to other fields of evaluation. When considering future developments in higher education evaluation (quality assurance), it is useful to observe its similarities to and differences from various evaluation approaches in other than higher education…

  6. Evaluation readiness: improved evaluation planning using a data inventory framework.

    PubMed

    Cohen, A B; Hall, K C; Cohodes, D R

    1985-01-01

    Factors intrinsic to many programs, such as ambiguously stated objectives, inadequately defined performance measures, and incomplete or unreliable databases, often conspire to limit the evaluability of these programs. Current evaluation planning approaches are somewhat constrained in their ability to overcome these obstacles and to achieve full preparedness for evaluation. In this paper, the concept of evaluation readiness is introduced as a complement to other evaluation planning approaches, most notably that of evaluability assessment. The basic products of evaluation readiness--the formal program definition and the data inventory framework--are described, along with a guide for assuring more timely and appropriate evaluation response capability to support the decision making needs of program managers. The utility of evaluation readiness for program planning, as well as for effective management, is also discussed.

  7. Evaluate to Improve: Useful Approaches to Student Evaluation

    ERIC Educational Resources Information Center

    Golding, Clinton; Adam, Lee

    2016-01-01

    Many teachers in higher education use feedback from students to evaluate their teaching, but only some use these evaluations to improve their teaching. One important factor that makes the difference is the teacher's approach to their evaluations. In this article, we identify some useful approaches for improving teaching. We conducted focus groups…

  8. Evaluating Evaluation Systems: Policy Levers and Strategies for Studying Implementation of Educator Evaluation. Policy Snapshot

    ERIC Educational Resources Information Center

    Matlach, Lauren

    2015-01-01

    Evaluation studies can provide feedback on implementation, support continuous improvement, and increase understanding of evaluation systems' impact on teaching and learning. Despite the importance of educator evaluation studies, states often need support to prioritize and fund them. Successful studies require expertise, time, and a shared…

  9. La peur de l'evaluation: evaluation de l'enseignement ou du sujet? (Fear of Evaluation: Evaluating the Teacher or the Subject?)

    ERIC Educational Resources Information Center

    Kosmidou-Hardy, Chryssoula; Marmarinos, Jean

    2001-01-01

    Addresses questions related to the evaluation of teachers, with specific attention to why there is such teacher resistance. Theorizes that it is the teachers' fear of evaluation of their personal identity rather than their professional competence that lies behind their resistance to evaluation. Calls for the use of action research as a basic…

  10. Evaluating Art.

    ERIC Educational Resources Information Center

    BCATA Journal for Art Teachers, 1990

    1990-01-01

    These journal articles examine the issues of evaluation and art education. In (1) "Self Evaluation for Secondary Art Students, Why Bother?" (Margaret Scarr), the article recommends that involving students in assessing their work contributes to learning. (2) "Evaluating for Success" (Arlene Smith) gives practical suggestions for…

  11. Evaluator Training: Content and Topic Valuation in University Evaluation Courses

    ERIC Educational Resources Information Center

    Davies, Randall; MacKay, Kathryn

    2014-01-01

    Quality training opportunities for evaluators will always be important to the evaluation profession. While studies have documented the number of university programs providing evaluation training, additional information is needed concerning what content is being taught in current evaluation courses. This article summarizes the findings of a survey…

  12. Evaluating theory-based evaluation: information, norms, and adherence.

    PubMed

    Jacobs, W Jake; Sisco, Melissa; Hill, Dawn; Malter, Frederic; Figueredo, Aurelio José

    2012-08-01

    Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. A marriage of applicable psychological theory, appropriate program evaluation theory, and outcome of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social interventions--the marriage permits us to advance knowledge by making use of both success and failures. We briefly review well-established principles within the field of program evaluation, well-established processes involved in changing social norms and social-norm adherence, the outcome of several program evaluations focusing on smoking prevention, pro-environmental behavior, and rape prevention and, using the principle of learning from our failures, examine why these programs often do not perform as expected. Finally, we discuss the promise of learning from our collective experiences to develop a cumulative science of program evaluation and to improve the performance of extant and future interventions. Copyright © 2012. Published by Elsevier Ltd.

  13. Qualitative Evaluation.

    ERIC Educational Resources Information Center

    Stone, James C., Ed.; James, Raymond A., Ed.

    1981-01-01

    "Qualitative evaluation" is the theme of this issue of the California Journal of Teacher Education. Ralph Tyler states that evaluation is essentially descriptive, and using numbers does not solve basic problems. Martha Elin Vernazza examines the issue of objectivity in history and its implications for evaluation. She posits that the…

  14. Reframing Evaluation: Defining an Indigenous Evaluation Framework

    ERIC Educational Resources Information Center

    LaFrance, Joan; Nichols, Richard

    2008-01-01

    The American Indian Higher Education Consortium (AIHEC), comprising 34 American Indian tribally controlled colleges and universities, has undertaken a comprehensive effort to develop an "Indigenous Framework for Evaluation" that synthesizes Indigenous ways of knowing and Western evaluation practice. To ground the framework, AIHEC engaged…

  15. Evaluating the Healthy Start program. Design development to evaluative assessment.

    PubMed

    Raykovich, K S; McCormick, M C; Howell, E M; Devaney, B L

    1996-09-01

    The national evaluation of the federally funded Healthy Start program involved translating a design for a process and outcomes evaluation and standard maternal and infant data set, both developed prior to the national evaluation contract award, into an evaluation design and client data collection protocol that could be used to evaluate 15 diverse grantees. This article discusses the experience of creating a process and outcomes evaluation design that was both substantively and methodologically appropriate given such issues as the diversity of grantees and their community-based intervention strategies; the process of accessing secondary data sources, including vital records; the quality of client level data submissions; and the need to incorporate both qualitative and quantitative approaches into the evaluation design. The relevance of this experience for the conduct of other field studies of public health interventions is discussed.

  16. Evaluation Strategies in Financial Education: Evaluation with Imperfect Instruments

    ERIC Educational Resources Information Center

    Robinson, Lauren; Dudensing, Rebekka; Granovsky, Nancy L.

    2016-01-01

    Program evaluation often suffers due to time constraints, imperfect instruments, incomplete data, and the need to report standardized metrics. This article about the evaluation process for the Wi$eUp financial education program showcases the difficulties inherent in evaluation and suggests best practices for assessing program effectiveness. We…

  17. Training Evaluation: An Analysis of the Stakeholders' Evaluation Needs

    ERIC Educational Resources Information Center

    Guerci, Marco; Vinante, Marco

    2011-01-01

    Purpose: In recent years, the literature on program evaluation has examined multi-stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap. Design/methodology/approach: This study identifies intersections between methodologies and approaches…

  18. Does Research on Evaluation Matter? Findings from a Survey of American Evaluation Association Members and Prominent Evaluation Theorists and Scholars

    ERIC Educational Resources Information Center

    Coryn, Chris L. S.; Ozeki, Satoshi; Wilson, Lyssa N.; Greenman, Gregory D., II; Schröter, Daniela C.; Hobson, Kristin A.; Azzam, Tarek; Vo, Anne T.

    2016-01-01

    Research on evaluation theories, methods, and practices has increased considerably in the past decade. Even so, little is known about whether published findings from research on evaluation are read by evaluators and whether such findings influence evaluators' thinking about evaluation or their evaluation practice. To address these questions, and…

  19. The Evaluator's Perspective: Evaluating the State Capacity Building Program.

    ERIC Educational Resources Information Center

    Madey, Doren L.

    A historical antagonism between the advocates of quantitative evaluation methods and the proponents of qualitative evaluation methods has stymied the recognition of the value to be gained by utilizing both methodologies in the same study. The integration of quantitative and qualitative methods within a single evaluation has synergistic effects in…

  20. Evaluating the utility of two gestural discomfort evaluation methods

    PubMed Central

    Son, Minseok; Jung, Jaemoon; Park, Woojin

    2017-01-01

    Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016

  1. Evaluation of Faculty

    PubMed Central

    Aburawi, Elhadi; McLean, Michelle; Shaban, Sami

    2014-01-01

    Objectives: Student evaluation of individual teachers is important in the quality improvement cycle. The aim of this study was to explore medical student and faculty perceptions of teacher evaluation in the light of dwindling participation in online evaluations. Methods: This study was conducted at the United Arab Emirates University College of Medicine & Health Sciences between September 2010 and June 2011. A 21-item questionnaire was used to investigate learner and faculty perceptions of teacher evaluation in terms of purpose, etiquette, confidentiality and outcome on a five-point Likert scale. Results: The questionnaire was completed by 54% of faculty and 23% of students. Faculty and students generally concurred that teachers should be evaluated by students but believed that the purpose of the evaluation should be explained. Despite acknowledging the confidentiality of online evaluation, faculty members were less sure that they would not recognise individual comments. While students perceived that the culture allowed objective evaluation, faculty members were less convinced. Although teachers claimed to take evaluation seriously, with Medical Sciences faculty members in particular indicating that they changed their teaching as a result of feedback, students were unsure whether teachers responded to feedback. Conclusion: Despite agreement on the value of evaluation, differences between faculty and student perceptions emerged in terms of confidentiality and whether evaluation led to improved practice. Educating both teachers and learners regarding the purpose of evaluation as a transparent process for quality improvement is imperative. PMID:25097772

  2. EVALUE : a computer program for evaluating investments in forest products industries

    Treesearch

    Peter J. Ince; Philip H. Steele

    1980-01-01

    EVALUE, a FORTRAN program, was developed to provide a framework for cash flow analysis of investment opportunities. EVALUE was designed to assist researchers in evaluating investment feasibility of new technology or new manufacturing processes. This report serves as user documentation for the EVALUE program. EVALUE is briefly described and notes on preparation of a...
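
    The EVALUE record above centers on discounted cash-flow analysis of investment feasibility. As a rough illustration of the kind of computation such a tool automates (a minimal net-present-value sketch, not the original FORTRAN program; the cash flows, discount rate, and function name are hypothetical):

```python
# Minimal net-present-value (NPV) sketch of the cash-flow analysis a tool
# like EVALUE performs. Illustrative only: the figures below are invented,
# not taken from the EVALUE report.

def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the year-0 outlay (usually
    negative); later entries are yearly net returns, each discounted
    back to year 0 at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical mill upgrade: $100,000 outlay, $30,000/year for 5 years.
flows = [-100_000] + [30_000] * 5
print(f"NPV at 8%: {npv(0.08, flows):,.2f}")
```

    A positive NPV at the chosen discount rate suggests the investment earns more than the hurdle rate, which is the basic feasibility criterion in this style of analysis.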

  3. Empowerment evaluation: An approach that has literally altered the landscape of evaluation.

    PubMed

    Donaldson, Stewart I

    2017-08-01

    The quest for credible and actionable evidence to improve decision making, foster improvement, enhance self-determination, and promote social betterment is now a global phenomenon. Evaluation theorists and practitioners alike have responded to and overcome the challenges that limited the effectiveness and usefulness of traditional evaluation approaches primarily focused on seeking rigorous scientific knowledge about social programs and policies. No modern evaluation approach has received a more robust welcome from stakeholders across the globe than empowerment evaluation. Empowerment evaluation has been a leader in the development of stakeholder involvement approaches to evaluation, setting a high bar. In addition, empowerment evaluation's respect for community knowledge and commitment to the people's right to build their own evaluation capacity has influenced the evaluation mainstream, particularly concerning evaluation capacity building. Empowerment evaluation's most significant contributions to the field have been to improving evaluation use and knowledge utilization. Copyright © 2016. Published by Elsevier Ltd.

  4. Linking Project Evaluation and Goals-Based Teacher Evaluation: Evaluating the Accelerated Schools Project in South Carolina.

    ERIC Educational Resources Information Center

    Finnan, Christine; Davis, Sara Calhoun

    This paper describes efforts to design an evaluation system that has as its primary objective helping schools effect positive change through the Accelerated Schools Project. Three characteristics were deemed essential: (1) that the evaluation be useful and meaningful; (2) that it be sensitive to local conditions; and (3) that evaluations of…

  5. The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy

    ERIC Educational Resources Information Center

    Hewitt, Michael P.

    2011-01-01

    The author sought to determine whether self-evaluation instruction had an impact on student self-evaluation, music performance, and self-evaluation accuracy of music performance among middle school instrumentalists. Participants (N = 211) were students at a private middle school located in a metropolitan area of a mid-Atlantic state. Students in…

  6. Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.

    PubMed

    Brand, Ralf; Antoniewicz, Franziska

    2016-12-01

    Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise, shared this information with them, asked them to reflect on it, and had them rate any discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that the mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.

  7. L'evaluation des politiques institutionnelles d'evaluation des apprentissages. Rapport synthese (The Evaluation of Institutional Policies of Evaluation of Learning. Synthesis Report). 2410-0520.

    ERIC Educational Resources Information Center

    Lindfelt, Bengt, Ed.

    In accordance with provincial educational regulations, Quebec's community colleges have adopted "politiques institutionnelles d'evaluation des apprentissages" (PIEA), or institutional policies of the evaluation of learning. This report provides a synthesis of evaluations of the PIEA conducted by the province's Commission on the…

  8. Evaluating Educational Programs.

    ERIC Educational Resources Information Center

    Ball, Samuel

    The activities of Educational Testing Service (ETS) in evaluating educational programs are described. Program evaluations are categorized as needs assessment, formative evaluation, or summative evaluation. Three classic efforts which illustrate the range of ETS' participation are the Pennsylvania Goals Study (1965), the Coleman Report--Equality of…

  9. Evaluer la competence de communication (Evaluating Communicative Competence).

    ERIC Educational Resources Information Center

    Hediard, Marie

    1988-01-01

    The structure of a course designed to teach oral communicative competence is outlined, and the approach to evaluation is discussed. Evaluation includes both a criterion test and a specific oral task that students must accomplish. (MSE)

  10. PM Evaluation Guidelines.

    ERIC Educational Resources Information Center

    Bauch, Jerold P.

    This paper presents guidelines for the evaluation of candidate performance, the basic function of the evaluation component of the Georgia program model for the preparation of elementary school teachers. The three steps in the evaluation procedure are outlined: (1) proficiency module (PM) entry appraisal (pretest); (2) self evaluation and the…

  11. Reconceptualizing Evaluator Roles

    ERIC Educational Resources Information Center

    Skolits, Gary J.; Morrow, Jennifer Ann; Burr, Erin Mehalic

    2009-01-01

    The current evaluation literature tends to conceptualize evaluator roles as a single, overarching orientation toward an evaluation, an orientation largely driven by evaluation methods, models, or stakeholder orientations. Roles identified range from a social transformer or a neutral social scientist to that of an educator or even a power merchant.…

  12. Einstein as Evaluator?

    ERIC Educational Resources Information Center

    Caulley, Darrel N.

    1982-01-01

    Like any other person, Albert Einstein was an informal evaluator, engaged in placing value on various aspects of his life, work, and the world. Based on Einstein's own statements, this paper speculates about what Einstein would have been like as a connoisseur evaluator, a conceptual evaluator, or a responsive evaluator. (Author/BW)

  13. Evaluate Yourself. Evaluation: Research-Based Decision Making Series, Number 9304.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    This document considers both self-examination and external evaluation of gifted and talented education programs. Principles of the self-examination process are offered, noting similarities to external evaluation models. Principles of self-evaluation efforts include the importance of maintaining a nonjudgmental orientation, soliciting views from…

  14. Beyond Evaluation: A Model for Cooperative Evaluation of Internet Resources.

    ERIC Educational Resources Information Center

    Kirkwood, Hal P., Jr.

    1998-01-01

    Presents a status report on Web site evaluation efforts, listing dead, merged, new review, Yahoo! wannabes, subject-specific review, former librarian-managed, and librarian-managed review sites; discusses how sites are evaluated; describes and demonstrates (reviewing company directories) the Marr/Kirkwood evaluation model; and provides an…

  15. Evaluating Theory-Based Evaluation: Information, Norms, and Adherence

    ERIC Educational Resources Information Center

    Jacobs, W. Jake; Sisco, Melissa; Hill, Dawn; Malter, Frederic; Figueredo, Aurelio Jose

    2012-01-01

    Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. A marriage of applicable psychological theory, appropriate program evaluation theory, and outcome of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social…

  16. Critical evaluation of international health programs: Reframing global health and evaluation.

    PubMed

    Chi, Chunhuei; Tuepker, Anaïs; Schoon, Rebecca; Núñez Mondaca, Alicia

    2018-04-01

    Striking changes in the funding and implementation of international health programs in recent decades have stimulated debate about the role of communities in deciding which health programs to implement. An important yet neglected piece of that discussion is the need to change norms in program evaluation so that analysis of community ownership, beyond various degrees of "participation," is seen as central to strong evaluation practices. This article challenges mainstream evaluation practices and proposes a framework of Critical Evaluation with 3 levels: upstream evaluation assessing the "who" and "how" of programming decisions; midstream evaluation focusing on the "who" and "how" of selecting program objectives; and downstream evaluation, the focus of current mainstream evaluation, which assesses whether the program achieved its stated objectives. A vital tenet of our framework is that a community possesses the right to determine the path of its health development. A prerequisite of success, regardless of technical outcomes, is that programs must address communities' high priority concerns. Current participatory methods still seldom practice community ownership of program selection because they are vulnerable to funding agencies' predetermined priorities. In addition to critiquing evaluation practices and proposing an alternative framework, we acknowledge likely challenges and propose directions for future research. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Responsive Meta-Evaluation: A Participatory Approach to Enhancing Evaluation Quality

    ERIC Educational Resources Information Center

    Sturges, Keith M.; Howley, Caitlin

    2017-01-01

    In an era of ever-deepening budget cuts and a concomitant demand for substantiated programs, many organizations have elected to conduct internal program evaluations. Internal evaluations offer advantages (e.g., enhanced evaluator program knowledge and ease of data collection) but may confront important challenges, including credibility threats,…

  18. Program Evaluation Resources

    EPA Pesticide Factsheets

    These resources list tools to help you conduct evaluations, find organizations outside of EPA that are useful to evaluators, and find additional guides on how to do evaluations from organizations outside of EPA.

  19. External Evaluation as Contract Work: The Production of Evaluator Identity

    ERIC Educational Resources Information Center

    Sturges, Keith M.

    2014-01-01

    Extracted from a larger study of the educational evaluation profession, this qualitative analysis explores how evaluator identity is shaped with constant reference to political economy, knowledge work, and personal history. Interviews with 24 social scientists who conduct or have conducted evaluations as a major part of their careers examined how…

  20. Using Recommendations in Evaluation: A Decision-Making Framework for Evaluators

    ERIC Educational Resources Information Center

    Iriti, Jennifer E.; Bickel, William E.; Nelson, Catherine Awsumb

    2005-01-01

    Is it appropriate and useful for evaluators to use findings to make recommendations? If so, under what circumstances? How specific should they be? This article presents a decision-making framework for the appropriateness of recommendations in varying contexts. On the basis of reviews of evaluation theory, selected evaluation reports, and feedback…

  1. Evaluating the "Evaluative State": Implications for Research in Higher Education.

    ERIC Educational Resources Information Center

    Dill, David D.

    1998-01-01

    Examines the "evaluative state"--that is, public management-based evaluation systems--in the context of experiences in the United Kingdom and New Zealand, and suggests that further research is needed to examine problems in the evaluative state itself, in how market competition impacts upon it, and in how academic oligarchies influence the…

  2. Reflections on Evaluation Costs: Direct and Indirect. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Alkin, Marvin; Ruskus, Joan A.

    This paper summarizes views on the costs of evaluation developed as part of CSE's Evaluation Productivity Project. In particular, it focuses on ideas about the kinds of costs associated with factors known to effect evaluation utilization. The first section deals with general issues involved in identifying and valuing cost components, particularly…

  3. Student Evaluation of Faculty Physicians: Gender Differences in Teaching Evaluations.

    PubMed

    Morgan, Helen K; Purkiss, Joel A; Porter, Annie C; Lypson, Monica L; Santen, Sally A; Christner, Jennifer G; Grum, Cyril M; Hammoud, Maya M

    2016-05-01

    To investigate whether there is a difference in medical student teaching evaluations for male and female clinical physician faculty. The authors examined all teaching evaluations completed by clinical students at one North American medical school in the surgery, obstetrics and gynecology, pediatrics, and internal medicine clinical rotations from 2008 to 2012. The authors focused on how students rated physician faculty on their "overall quality of teaching" using a 5-point response scale (1 = Poor to 5 = Excellent). Linear mixed-effects models provided estimated mean differences in evaluation outcomes by faculty gender. There were 14,107 teaching evaluations of 965 physician faculty. Of these evaluations, 7688 (54%) were for male physician faculty and 6419 (46%) were for female physician faculty. Female physicians received significantly lower mean evaluation scores in all four rotations. The discrepancy was largest in the surgery rotation (males = 4.23, females = 4.01, p = 0.003). Pediatrics showed the next greatest difference (males = 4.44, females = 4.29, p = 0.009), followed by obstetrics and gynecology (males = 4.38, females = 4.26, p = 0.026), and internal medicine (males = 4.35, females = 4.27, p = 0.043). Female physicians received lower teaching evaluations in all four core clinical rotations. This comprehensive examination adds to the medical literature by illuminating subtle differences in evaluations based on physician gender, and provides further evidence of disparities for women in academic medicine.
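    The group comparison described above can be sketched as follows. This toy summary only computes mean "overall quality of teaching" scores by faculty gender within each rotation; the record layout, names, and numbers are illustrative assumptions, and it does not reproduce the study's linear mixed-effects models.

```python
# Illustrative sketch (not the study's analysis): mean teaching-evaluation
# score by faculty gender within each clinical rotation. Data are invented.
from collections import defaultdict

def mean_scores_by_gender(evaluations):
    """evaluations: iterable of (rotation, gender, score) tuples."""
    sums = defaultdict(lambda: [0.0, 0])  # (rotation, gender) -> [total, count]
    for rotation, gender, score in evaluations:
        cell = sums[(rotation, gender)]
        cell[0] += score
        cell[1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

# Hypothetical toy records on the study's 1-5 response scale.
evals = [
    ("surgery", "M", 5), ("surgery", "M", 4),
    ("surgery", "F", 4), ("surgery", "F", 3),
]
means = mean_scores_by_gender(evals)
gap = means[("surgery", "M")] - means[("surgery", "F")]
print(means, gap)
```

    In the study itself, linear mixed-effects models additionally account for repeated evaluations of the same faculty member; a plain group mean, as here, ignores that clustering.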

  4. Re-Evaluating Course Evaluations: Clarity, Visibility, and Functionality

    ERIC Educational Resources Information Center

    Richardson, Stephanie Jean; Coleman, Darrell; Stephenson, Jill

    2014-01-01

    This article presents an innovative framework that provides a means to understand and reevaluate student course evaluation systems. We present three major concepts vital to course evaluation systems and explain how they inform five evolutionary stages. Additionally, we show how the major stakeholders--students, faculty and administrators--are…

  5. Teacher Evaluations: Do Classroom Observations and Evaluator Training Really Matter?

    ERIC Educational Resources Information Center

    Pies, Sarah J.

    2017-01-01

    The purpose of this study was to determine if the minimum number of observations stated in a district's teacher evaluation plan, observation characteristics described in a district's evaluation plan, and the characteristic of those evaluating teachers had an impact on whether a school would receive a bonus or penalty point for Indiana's A-F…

  6. How To Design a Program Evaluation. Program Evaluation Kit, 3.

    ERIC Educational Resources Information Center

    Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons

    The evaluation design, which prescribes when and from whom to gather data, is described as significant because it can lend credibility to evaluation results. The objectives of this booklet are to aid the evaluator in: choosing a design; putting it into operation; and analyzing and reporting the data. Examples include both formative and summative…

  7. Broadening the Educational Evaluation Lens with Communicative Evaluation

    ERIC Educational Resources Information Center

    Brooks-LaRaviere, Margaret; Ryan, Katherine; Miron, Luis; Samuels, Maurice

    2009-01-01

    Outcomes-based accountability in the form of test scores and performance indicators are a primary lever for improving student achievement in the current educational landscape. The article presents communicative evaluation as a complementary evaluation approach that may be used along with the primary methods of school accountability to provide a…

  8. Evaluating Evaluations: The Case of Parent Involvement Programs

    ERIC Educational Resources Information Center

    Mattingly, Doreen J.; Prislin, Radmila; McKenzie, Thomas L.; Rodriguez, James L.; Kayzar, Brenda

    2002-01-01

    This article analyzes 41 studies that evaluated K-12 parent involvement programs in order to assess claims that such programs are an effective means of improving student learning. It examines the characteristics of the parent involvement programs, as well as the research design, data, and analytical techniques used in program evaluation. Our…

  9. Development, evaluation, and utility of a peer evaluation form for online teaching.

    PubMed

    Gaskamp, Carol D; Kintner, Eileen

    2014-01-01

    Formative assessment of teaching by peers is an important component of quality improvement for educators. Teaching portfolios submitted for promotion and tenure are expected to include peer evaluations. Faculty resources designed for peer evaluation of classroom teaching are often inadequate for evaluating online teaching. The authors describe development, evaluation, and utility of a new peer evaluation form for formative assessment of online teaching deemed relevant, sound, feasible, and beneficial.

  10. Apprentice Performance Evaluation.

    ERIC Educational Resources Information Center

    Gast, Clyde W.

    The Granite City (Illinois) Steel apprentices are under a performance evaluation from entry to graduation. Federally approved, the program is guided by joint apprenticeship committees whose monthly meetings include performance evaluation from three information sources: journeymen, supervisors, and instructors. Journeymen's evaluations are made…

  11. Evaluating impact of clinical guidelines using a realist evaluation framework.

    PubMed

    Reddy, Sandeep; Wakerman, John; Westhorp, Gill; Herring, Sally

    2015-12-01

    The Remote Primary Health Care Manuals (RPHCM) project team manages the development and publication of clinical protocols and procedures for primary care clinicians practicing in remote Australia. The Central Australian Rural Practitioners Association Standard Treatment Manual, the flagship manual of the RPHCM suite, has been evaluated for accessibility and acceptability in remote clinics three times in its 20-year history. These evaluations did not consider a theory-based framework or a programme theory, resulting in some limitations in the evaluation findings. Because the RPHCM aims to enable evidence-based practice in remote clinics, and is anecdotally reported to do so, testing this empirically for the full suite is vital for both stakeholders and future editions of the RPHCM. The project team utilized a realist evaluation framework to assess how, why, and for what the RPHCM were being used by remote practitioners. A theory regarding the circumstances in which the manuals have and have not enabled evidence-based practice in the remote clinical context was tested. The project assessed this theory for all the manuals in the RPHCM suite, across government and Aboriginal community-controlled clinics, in three regions of Australia. Implementing a realist evaluation framework to generate robust findings in this context has required innovation in the evaluation design and adaptation by researchers. This article captures the RPHCM team's experience in designing this evaluation. © 2015 John Wiley & Sons, Ltd.

  12. Evaluation Influence: The Evaluation Event and Capital Flow in International Development.

    PubMed

    Bell, David A

    2017-12-01

    Assessing program effectiveness in human development is central to informing foreign aid policy-making and organizational learning. Foreign aid effectiveness discussions have increasingly given attention to the devaluing effects of aid flow volatility. This study reveals that the external evaluation event influences actor behavior, serving as a volatility-constraining tool. A case study of a multidonor aid development mechanism served as the basis for examining the influence of an evaluation event when considering anticipatory effects. The qualitative component used text and focus group data combined with individual interview data (organizations n = 10, including 26 individuals). Quantitative data included financial information on all 75 capital investments. The integrated theory of influence and the model of alternative mechanisms were applied to these components to identify the linkage between the evaluation event and capital flow volatility. Aid approved in the year of the midterm evaluation was disbursed by the mechanism with low capital volatility. Anticipating the evaluation event influenced behavior, resulting in an empirical record showing that program outcomes were enhanced and that the mechanism was an improved organization. Formative evaluations in a development program can trigger activity as an interim process. That activity provides for a more robust assessment of the ultimate consequence of interest. Anticipating an evaluation can stimulate donor reality testing. The findings inform and strengthen future research on the influence of anticipating an evaluation. Closely examining activities before, during, and shortly after the evaluation event can aid development of other systematic methods to improve understanding of this phenomenon, as well as improve donor effectiveness strategies.

  13. Making Sense of Participatory Evaluation: Framing Participatory Evaluation

    ERIC Educational Resources Information Center

    King, Jean A.; Cousins, J. Bradley; Whitmore, Elizabeth

    2007-01-01

    This chapter begins with a commentary by King, a longtime admirer of Cousins and Whitmore, in which she discusses why their 1998 article on participatory evaluation is considered an important contribution to the field. Participatory evaluation was not a new idea in 1998. By the mid-1990s articles, chapters, and books that described evaluations…

  14. Influences on Evaluation Quality

    ERIC Educational Resources Information Center

    Cooksy, Leslie J.; Mark, Melvin M.

    2012-01-01

    Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010 Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b) aspects of the evaluation environment or context,…

  15. Evaluation Methods Sourcebook.

    ERIC Educational Resources Information Center

    Love, Arnold J., Ed.

    The chapters commissioned for this book describe key aspects of evaluation methodology as they are practiced in a Canadian context, providing representative illustrations of recent developments in evaluation methodology as it is currently applied. The following chapters are included: (1) "Program Evaluation with Limited Fiscal and Human…

  16. Quarter System Evaluation. Final Evaluation Report 1975-1976.

    ERIC Educational Resources Information Center

    Matuszek, Paula A.; And Others

    This evaluation of the quarter system in Austin, Texas, public schools was designed to assess the impact of changes of calendar, curriculum, and other aspects of high school education. The initial first-year evaluation was intended to gather data that could serve as a baseline for examining the long-term effects of these changes. Data were…

  17. Idea Evaluation: Error in Evaluating Highly Original Ideas

    ERIC Educational Resources Information Center

    Licuanan, Brian F.; Dailey, Lesley R.; Mumford, Michael D.

    2007-01-01

    Idea evaluation is a critical aspect of creative thought. However, a number of errors might occur in the evaluation of new ideas. One error commonly observed is the tendency to underestimate the originality of truly novel ideas. In the present study, an attempt was made to assess whether analysis of the process leading to the idea generation and…

  18. Evaluation Thesaurus. Second Edition.

    ERIC Educational Resources Information Center

    Scriven, Michael

    This thesaurus to the evaluation field is not restricted to educational evaluation or to program evaluation, but also refers to product, personnel, and proposal evaluation, as well as to quality control, the grading of work samples, and to all the other areas in which disciplined evaluation is practiced. It contains many suggestions, procedures,…

  19. Toward Better Research on--and Thinking about--Evaluation Influence, Especially in Multisite Evaluations

    ERIC Educational Resources Information Center

    Mark, Melvin M.

    2011-01-01

    Evaluation is typically carried out with the intention of making a difference in the understandings and actions of stakeholders and decision makers. The author provides a general review of the concepts of evaluation "use," evaluation "influence," and "influence pathways," with connections to multisite evaluations. The study of evaluation influence…

  20. Individualised Qualitative Evaluation.

    ERIC Educational Resources Information Center

    O'Sullivan, Denis

    1987-01-01

    The author discusses student evaluation in relation to adult and continuing education programs offered by the Department of Adult Education, University College, Cork. He highlights the need for a more individualized and interactive approach to evaluation, allowing the student to benefit from qualitative feedback in the process of being evaluated.…

  1. Teacher Evaluation

    ERIC Educational Resources Information Center

    Sayavedra, Melinda

    2014-01-01

    Accredited universities normally include a standard that addresses faculty evaluation. It may contain references to performance criteria and procedures and usually emphasizes the need for faculty evaluations to be systematic, regular, fair, objective and relevant to achieving the goals of the institution. Accredited language programs usually have…

  2. The Practice of Evaluation Research and the Use of Evaluation Results.

    ERIC Educational Resources Information Center

    Van den Berg, G.; Hoeben, W. Th. J. G.

    1984-01-01

    Lack of use of educational evaluation results in the Netherlands was investigated by analyzing 14 curriculum evaluation studies. Results indicated that rational decision making with a technical (empirical) evaluation approach makes utilization of results most likely. Incremental decision making and a conformative approach make utilization least…

  3. Automated evaluation of AIMS images: an approach to minimize evaluation variability

    NASA Astrophysics Data System (ADS)

    Dürr, Arndt C.; Arndt, Martin; Fiebig, Jan; Weiss, Samuel

    2006-05-01

    Defect disposition and qualification with stepper-simulating AIMS tools on advanced masks of the 90nm node and below is key to meeting customer expectations for "defect free" masks, i.e. masks containing only non-printing design variations. The recently available AIMS tools allow for a large degree of automated measurement, enhancing mask throughput and hence reducing cycle time - up to 50 images can be recorded per hour. However, this amount of data still has to be evaluated by hand, which is not only time-consuming but also error-prone and exhibits variability depending on the person performing the evaluation, which adds to the tool's intrinsic variability and decreases the reliability of the evaluation. In this paper we present the results of a MATLAB-based algorithm which automatically evaluates AIMS images. We investigate its capabilities regarding throughput, reliability, and agreement with manual evaluation for a large variety of dark and clear defects, and discuss the limitations of an automated AIMS evaluation algorithm.

  4. Politics in evaluation: Politically responsive evaluation in high stakes environments.

    PubMed

    Azzam, Tarek; Levine, Bret

    2015-12-01

    The role of politics has often been discussed in evaluation theory and practice. The political context can have major effects on the evaluation design, approach, and methods. Politics also has the potential to influence the decisions made from the evaluation findings. The current study focuses on the influence of the political context on stakeholder decision making. Utilizing a simulation scenario, this study compares stakeholder decision making in high- and low-stakes evaluation contexts. Findings suggest that high-stakes political environments are more likely than low-stakes environments to lead to reduced reliance on technically appropriate measures and increased dependence on measures that better reflect the broader political environment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. 34 CFR 75.592 - Federal evaluation-satisfying requirement for grantee evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Federal evaluation-satisfying requirement for grantee evaluation. 75.592 Section 75.592 Education Office of the Secretary, Department of Education DIRECT GRANT PROGRAMS What Conditions Must Be Met by a Grantee? Evaluation § 75.592 Federal evaluation—satisfying...

  6. Program Evaluation: The Board Game--An Interactive Learning Tool for Evaluators

    ERIC Educational Resources Information Center

    Febey, Karen; Coyne, Molly

    2007-01-01

    The field of program evaluation lacks interactive teaching tools. To address this pedagogical issue, the authors developed a collaborative learning technique called Program Evaluation: The Board Game. The authors present the game and its development in this practitioner-oriented article. The evaluation board game is an adaptable teaching tool…

  7. 34 CFR 75.592 - Federal evaluation-satisfying requirement for grantee evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 1 2014-07-01 2014-07-01 false Federal evaluation-satisfying requirement for grantee evaluation. 75.592 Section 75.592 Education Office of the Secretary, Department of Education DIRECT GRANT PROGRAMS What Conditions Must Be Met by a Grantee? Evaluation § 75.592 Federal evaluation—satisfying...

  8. Report of the Inter-Organizational Committee on Evaluation. Internal Evaluation Model.

    ERIC Educational Resources Information Center

    White, Roy; Murray, John

    Based upon the premise that school divisions in Manitoba, Canada, should evaluate and improve upon themselves, this evaluation model was developed. The participating personnel and the development of the evaluation model are described. The model has 11 parts: (1) needs assessment; (2) statement of objectives; (3) definition of objectives; (4)…

  9. Evaluation in Extension.

    ERIC Educational Resources Information Center

    Byrn, Darcie; And Others

    The authors have written this manual to help workers in the Cooperative Extension Service of the United States better understand and apply the principles and methods of evaluation. The manual contains three sections, which cover the nature and place of evaluation in extension work, the evaluation process, and the uses of evaluation…

  10. Reading Evaluation

    ERIC Educational Resources Information Center

    Fagan, W. T.

    1978-01-01

    The Canadian Institute for Research in Behavioral and Social Sciences of Calgary was awarded a contract by the Provincial Government of Alberta to assess student skills and knowledge in reading and written composition. Here evaluation is defined and the use of standardized and criterion referenced tests for evaluating reading performance are…

  11. Self Evaluation of Organizations.

    ERIC Educational Resources Information Center

    Pooley, Richard C.

    Evaluation within human service organizations is defined in terms of accepted evaluation criteria, with reasonable expectations shown and structured into a model of systematic evaluation practice. The evaluation criteria of program effort, performance, adequacy, efficiency and process mechanisms are discussed, along with measurement information…

  12. Presidential Address: Empowerment Evaluation.

    ERIC Educational Resources Information Center

    Fetterman, David

    1994-01-01

    Empowerment evaluation is the use of evaluation concepts and techniques to foster self-determination, focusing on helping people help themselves. This collaborative evaluation approach requires both qualitative and quantitative methodologies. It is a multifaceted approach that can be applied to evaluation in any area. (SLD)

  13. A KARAOKE System Singing Evaluation Method that More Closely Matches Human Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    KARAOKE is a popular amusement for old and young. Many KARAOKE machines have a singing evaluation function. However, it is often said that the scores given by KARAOKE machines do not match human evaluation. This paper proposes a KARAOKE scoring method strongly correlated with human evaluation, which evaluates songs based on the distance between the singing pitch and the musical scale, employing a vibrato extraction method based on template matching of the spectrum. The results show that correlation coefficients between scores given by the proposed system and human evaluation are -0.76 to -0.89.
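    The pitch-distance idea described in this record can be sketched as follows. The function names, the cents-based deviation measure, and the linear score mapping are illustrative assumptions, not the authors' method, and vibrato extraction is omitted entirely.

```python
# Illustrative sketch: score a sung pitch track by its distance from the
# nearest equal-tempered scale note. Constants and names are assumptions.
import math

A4 = 440.0  # reference tuning, Hz

def cents_from_nearest_note(freq_hz: float) -> float:
    """Signed distance in cents from the nearest semitone in A440 tuning."""
    semitones = 12 * math.log2(freq_hz / A4)
    return 100 * (semitones - round(semitones))

def score_performance(pitch_track_hz):
    """Map mean absolute deviation (in cents) to a 0-100 score."""
    devs = [abs(cents_from_nearest_note(f)) for f in pitch_track_hz if f > 0]
    if not devs:
        return 0.0
    mean_dev = sum(devs) / len(devs)  # 0 cents = on pitch, 50 = maximally off
    return max(0.0, 100.0 - 2.0 * mean_dev)

print(score_performance([440.0, 659.26, 440.0]))  # close to 100: in tune
print(score_performance([452.0, 452.0]))          # heavily detuned, low score
```

    A real system would first need a pitch tracker to produce `pitch_track_hz` from audio, and, as in the record above, separate handling of vibrato so that intentional pitch oscillation is not penalized as error.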

  14. Evaluating Teachers of Writing.

    ERIC Educational Resources Information Center

    Hult, Christine A., Ed.

    Describing the various forms evaluation can take, this book delineates problems in evaluating writing faculty and sets the stage for reconsidering the entire process to produce a fair, equitable, and appropriate system. The book discusses evaluation through real-life examples: evaluation of writing faculty by literature faculty, student…

  15. Peer Evaluation of Teaching or "Fear" Evaluation: In Search of Compatibility

    ERIC Educational Resources Information Center

    Salih, Abdel Rahman Abdalla

    2013-01-01

    Peer evaluation or review of teaching is one of the factors of quality assurance system at the present time. However, peer evaluation is sometimes approached with trepidation and with the feeling that it may not be fair and free of bias. This paper examines teachers' perceptions of peer evaluation as an enhancement for quality teaching. A…

  16. Adaptation, Evaluation and Inclusion

    ERIC Educational Resources Information Center

    Basson, R.

    2011-01-01

    In this article I reflect on a recent development currently shaping programme evaluation as a field: the case for evaluators facilitating evaluation training so that evaluees can evaluate themselves and improve the programmes they teach. Fetterman argues persuasively that the practice was incipient in the field and required formalization and acceptance…

  17. Evaluation of clinical practice guidelines.

    PubMed Central

    Basinski, A S

    1995-01-01

    Compared with the current focus on the development of clinical practice guidelines, the effort devoted to their evaluation is meagre. Yet the ultimate success of guidelines depends on routine evaluation. Three types of evaluation are identified: evaluation of guidelines under development and before dissemination and implementation, evaluation of health care programs in which guidelines play a central role, and scientific evaluation, through studies that provide the scientific knowledge base for the further evolution of guidelines. Identification of evaluation and program goals, evaluation design and a framework for evaluation planning are discussed. PMID:7489550

  18. Evaluation and communication: using a communication audit to evaluate organizational communication.

    PubMed

    Hogard, Elaine; Ellis, Roger

    2006-04-01

    This article identifies a surprising dearth of studies that explicitly link communication and evaluation at substantive, theoretical, and methodological levels. A three-fold typology of evaluation studies referring to communication is proposed and examples given. The importance of organizational communication in program delivery is stressed and illustrative studies reviewed. It is proposed that organizational communication should be considered in all program evaluations and that this should be approached through communication audit. Communication audits are described with particular reference to established survey questionnaire instruments. Two case studies exemplify the use of such instruments in the evaluation of educational and social programs.

  19. Is evaluative conditioning really resistant to extinction? Evidence for changes in evaluative judgements without changes in evaluative representations.

    PubMed

    Gawronski, Bertram; Gast, Anne; De Houwer, Jan

    2015-01-01

    Evaluative conditioning (EC) is defined as the change in the evaluation of a conditioned stimulus (CS) due to its pairing with a positive or negative unconditioned stimulus (US). Although several individual studies suggest that EC is unaffected by unreinforced presentations of the CS without the US, a recent meta-analysis indicates that EC effects are less pronounced for post-extinction measurements than post-acquisition measurements. The disparity in research findings suggests that extinction of EC may depend on yet unidentified conditions. In an attempt to uncover these conditions, three experiments (N = 784) investigated the influence of unreinforced post-acquisition CS presentations on EC effects resulting from simultaneous versus sequential pairings and pairings with single versus multiple USs. For all four types of CS-US pairings, EC effects on self-reported evaluations were reduced by unreinforced CS presentations, but only when the CSs had been rated after the initial presentation of CS-US pairings. EC effects on an evaluative priming measure remained unaffected by unreinforced CS presentations regardless of whether the CSs had been rated after acquisition. The results suggest that reduced EC effects resulting from unreinforced CS presentations are due to judgement-related processes during the verbal expression of CS evaluations rather than genuine changes in the underlying evaluative representations.

  20. Health services research evaluation principles. Broadening a general framework for evaluating health information technology.

    PubMed

    Sockolow, P S; Crawford, P R; Lehmann, H P

    2012-01-01

    Our forthcoming national experiment in increased health information technology (HIT) adoption, funded by the American Recovery and Reinvestment Act of 2009, will require a comprehensive approach to evaluating HIT. The quality of HIT evaluation studies to date reveals a need for broader evaluation frameworks: the frameworks available limit the generalizability of findings and the depth of lessons learned. Our objective was to develop an informatics evaluation framework for HIT that integrates components of health services research (HSR) evaluation and informatics evaluation, addressing identified shortcomings in available HIT evaluation frameworks. A systematic literature review updated and expanded the exhaustive review by Ammenwerth and deKeizer (AdK). From the retained studies, criteria were elicited and organized into classes within a framework. The resulting Health Information Technology Research-based Evaluation Framework (HITREF) was used to guide clinician satisfaction survey construction, multi-dimensional analysis of data, and interpretation of findings in an evaluation of a vanguard community health care EHR. The updated review identified 128 electronic health record (EHR) evaluation studies and seven evaluation criteria not in AdK: EHR Selection/Development/Training; Patient Privacy Concerns; Unintended Consequences/Benefits; Functionality; Patient Satisfaction with EHR; Barriers/Facilitators to Adoption; and Patient Satisfaction with Care. HITREF was used productively and proved a complete evaluation framework, covering all themes that emerged. We can recommend that future EHR evaluators consider adding a complete, research-based HIT evaluation framework, such as HITREF, to their evaluation tool suite to monitor HIT challenges as the federal government strives to increase HIT adoption.

  1. Students' Evaluation of Faculty

    ERIC Educational Resources Information Center

    Thawabieh, Ahmad M.

    2017-01-01

    This study aimed to investigate how students evaluate their faculty and the effects of gender, expected grade, and college on students' evaluations. The study sample consisted of 5291 students from Tafila Technical University. A faculty evaluation scale was used to collect data. The results indicated that student evaluation of faculty was high (mean =…

  2. Issues in evaluation: evaluating assessments of elderly people using a combination of methods.

    PubMed

    McEwan, R T

    1989-02-01

    In evaluating a health service, individuals will give differing accounts of its performance, according to their experiences of the service, and the evaluative perspective they adopt. The value of a service may also change through time, and according to the particular part of the service studied. Traditional health care evaluations have generally not accounted for this variability because of the approaches used. Studies evaluating screening or assessment programmes for the elderly have focused on programme effectiveness and efficiency, using relatively inflexible quantitative methods. Evaluative approaches must reflect the complexity of health service provision, and methods must vary to suit the particular research objective. Under these circumstances, this paper presents the case for the use of multiple triangulation in evaluative research, where differing methods and perspectives are combined in one study. Emphasis is placed on the applications and benefits of subjectivist approaches in evaluation. An example of combined methods is provided in the form of an evaluation of the Newcastle Care Plan for the Elderly.

  3. Evaluating the evaluation of cancer driver genes

    PubMed Central

    Tokheim, Collin J.; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Karchin, Rachel

    2016-01-01

    Sequencing has identified millions of somatic mutations in human cancers, but distinguishing cancer driver genes remains a major challenge. Numerous methods have been developed to identify driver genes, but evaluation of the performance of these methods is hindered by the lack of a gold standard, that is, bona fide driver gene mutations. Here, we establish an evaluation framework that can be applied to driver gene prediction methods. We used this framework to compare the performance of eight such methods. One of these methods, described here, incorporated a machine-learning–based ratiometric approach. We show that the driver genes predicted by each of the eight methods vary widely. Moreover, the P values reported by several of the methods were inconsistent with the uniform values expected, thus calling into question the assumptions that were used to generate them. Finally, we evaluated the potential effects of unexplained variability in mutation rates on false-positive driver gene predictions. Our analysis points to the strengths and weaknesses of each of the currently available methods and offers guidance for improving them in the future. PMID:27911828
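The abstract notes that several methods reported P values "inconsistent with the uniform values expected", a standard calibration check: under the null, p-values for non-driver genes should be approximately Uniform(0,1). A plain-Python sketch of one such check, a one-sample Kolmogorov-Smirnov statistic against the uniform distribution, with made-up p-value lists for illustration:

```python
def ks_uniform(pvalues):
    """One-sample Kolmogorov-Smirnov statistic against Uniform(0,1).
    A well-calibrated method yields a small statistic on null genes;
    a large value flags miscalibrated (e.g. deflated/inflated) p-values."""
    xs = sorted(pvalues)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # Compare the empirical CDF just above and below each point with
        # the Uniform(0,1) CDF, which is simply F(x) = x.
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

calibrated = [i / 100 for i in range(1, 100)]  # near-uniform p-values
inflated = [p ** 3 for p in calibrated]        # skewed toward zero

assert ks_uniform(calibrated) < ks_uniform(inflated)
```

The driver-gene papers themselves typically use QQ-plots for the same diagnosis; the KS statistic is just one compact way to quantify the departure from uniformity.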

  4. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods

    PubMed Central

    2010-01-01

    Background Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. Methods/Design The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. Discussion This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community. PMID:20109202

  5. Evaluation of health promotion in schools: a realistic evaluation approach using mixed methods.

    PubMed

    Pommier, Jeanine; Guével, Marie-Renée; Jourdan, Didier

    2010-01-28

    Schools are key settings for health promotion (HP) but the development of suitable approaches for evaluating HP in schools is still a major topic of discussion. This article presents a research protocol of a program developed to evaluate HP. After reviewing HP evaluation issues, the various possible approaches are analyzed and the importance of a realistic evaluation framework and a mixed methods (MM) design are demonstrated. The design is based on a systemic approach to evaluation, taking into account the mechanisms, context and outcomes, as defined in realistic evaluation, adjusted to our own French context using an MM approach. The characteristics of the design are illustrated through the evaluation of a nationwide HP program in French primary schools designed to enhance children's social, emotional and physical health by improving teachers' HP practices and promoting a healthy school environment. An embedded MM design is used in which a qualitative data set plays a supportive, secondary role in a study based primarily on a different quantitative data set. The way the qualitative and quantitative approaches are combined through the entire evaluation framework is detailed. This study is a contribution towards the development of suitable approaches for evaluating HP programs in schools. The systemic approach of the evaluation carried out in this research is appropriate since it takes account of the limitations of traditional evaluation approaches and considers suggestions made by the HP research community.

  6. Evaluation Design, 1978-1979. Local/State Bilingual Education Evaluation.

    ERIC Educational Resources Information Center

    Weibly, Gary; And Others

    The evaluation design of the 1978-79 local/state bilingual education program of Austin Independent School District is presented. The primary focus of the evaluation is the assessment of the objectives in language development and concept development submitted to the Texas Education Agency. A secondary focus is the collection of information related…

  7. Evaluation as Story: The Narrative Quality of Educational Evaluation.

    ERIC Educational Resources Information Center

    Wachtman, Edward L.

    The author presents his opinion that educational evaluation has much similarity to the nonfiction narrative (defined as a series of events ordered in time), particularly as it relates a current situation to future possibilities. He refers to Stake's statement that evaluation is concerned not only with outcomes but also with antecedents and with…

  8. Evaluating State Principal Evaluation Plans across the United States

    ERIC Educational Resources Information Center

    Fuller, Edward J.; Hollingworth, Liz; Liu, Jing

    2015-01-01

    Recent federal legislation has created strong incentives for states to adopt principal evaluation systems, many of which include new measures of principal effectiveness such as estimates of student growth and changes in school climate. Yet, there has been little research on principal evaluation systems and no state-by-state analysis of the…

  9. Institution Building and Evaluation.

    ERIC Educational Resources Information Center

    Wedemeyer, Charles A.

    Institutional modeling and program evaluation in relation to a correspondence program are discussed. The evaluation process is first considered from the viewpoint that it is an add-on activity, which is largely summative, and is the least desirable type of evaluation. Formative evaluation is next considered as a part of the process of institution…

  10. MINERGY CORPORATION GLASS FURNACE TECHNOLOGY EVALUATION: INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This report presents performance and economic data for a U.S. Environmental Protection Agency (EPA) Superfund Innovative Technology Evaluation (SITE) Program demonstration of the Minergy Corporation (Minergy) Glass Furnace Technology (GFT). The demonstration evaluated the techno...

  11. Working with External Evaluators

    ERIC Educational Resources Information Center

    Silver, Lauren; Burg, Scott

    2015-01-01

    Hiring an external evaluator is not right for every museum or every project. Evaluations are highly situational, grounded in specific times and places; each one is unique. The museum and the evaluator share equal responsibility in an evaluation's success, so it is worth investing time and effort to ensure that both are clear about the goals,…

  12. Computerization of a preanesthetic evaluation and user satisfaction evaluation.

    PubMed

    Arias, Antonio; Benítez, Sonia; Canosa, Daniela; Borbolla, Damián; Staccia, Gustavo; Plazzotta, Fernando; Casais, Marcela; Michelangelo, Hernán; Luna, Daniel; Bernaldo de Quirós, Fernán Gonzalez

    2010-01-01

    The purpose of preanesthetic evaluation is to reduce morbidity and mortality through the review of the patient's medical history, clinical examination, and targeted clinical studies, providing referrals for medical consultations when appropriate. Changes in patient care, standards of health information management, and patterns of perioperative care have resulted in a re-conceptualization of this process, in which the documentation of patient medical information, the efforts in training, and maintaining the integrity of the medical-legal evaluation are areas of concern. The aim of this paper is to describe the design, development, training, and implementation of a computerized preanesthetic evaluation form, together with an evaluation of user satisfaction with the system. Since the system went live in September 2008, there have been 15121 closed structured forms, 60% for ambulatory procedures and 40% for procedures that required hospital admission. Of the total closed structured forms, 82% recorded a procedural risk of 1-2 according to the American Society of Anesthesiologists classification. The survey indicates a positive general satisfaction of the users with the system.

  13. Practice-centred evaluation and the privileging of care in health information technology evaluation.

    PubMed

    Darking, Mary; Anson, Rachel; Bravo, Ferdinand; Davis, Julie; Flowers, Steve; Gillingham, Emma; Goldberg, Lawrence; Helliwell, Paul; Henwood, Flis; Hudson, Claire; Latimer, Simon; Lowes, Paul; Stirling, Ian

    2014-06-05

    Our contribution, drawn from our experience of the case study provided, is a protocol for practice-centred, participative evaluation of technology in the clinical setting that privileges care. In this context 'practice-centred' evaluation acts as a scalable, coordinating framework for evaluation that recognises health information technology supported care as an achievement that is contingent and ongoing. We argue that if complex programmes of technology-enabled service innovation are understood in terms of their contribution to patient care and supported by participative, capability-building evaluation methodologies, conditions are created for practitioners and patients to realise the potential of technologies and make substantive contributions to the evidence base underpinning health innovation programmes. Electronic Patient Records (EPRs) and telemedicine are positioned by policymakers as health information technologies that are integral to achieving improved clinical outcomes and efficiency savings. However, evaluating the extent to which these aims are met poses distinct evaluation challenges, particularly where clinical and cost outcomes form the sole focus of evaluation design. We propose that a practice-centred approach to evaluation - in which those whose day-to-day care practice is altered (or not) by the introduction of new technologies are placed at the centre of evaluation efforts - can complement and in some instances offer advantages over, outcome-centric evaluation models. We carried out a regional programme of innovation in renal services where a participative approach was taken to the introduction of new technologies, including: a regional EPR system and a system to support video clinics. An 'action learning' approach was taken to procurement, pre-implementation planning, implementation, ongoing development and evaluation. Participants included clinicians, technology specialists, patients and external academic researchers. 
Whilst undergoing these

  14. Evaluation Use: Results from a Survey of U.S. American Evaluation Association Members

    ERIC Educational Resources Information Center

    Fleischer, Dreolin N.; Christie, Christina A.

    2009-01-01

    This paper presents the results of a cross-sectional survey on evaluation use completed by 1,140 U.S. American Evaluation Association members. This study had three foci: evaluators' current attitudes, perceptions, and experiences related to evaluation use theory and practice, how these data are similar to those reported in a previous study…

  15. 38 CFR 21.6052 - Evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Evaluations. 21.6052... Recipients Evaluation § 21.6052 Evaluations. (a) Scope and nature of evaluation. The scope and nature of the evaluation under this program shall be the same as for an evaluation of the reasonable feasibility of...

  16. [An analysis of residents' self-evaluation and faculty-evaluation in internal medicine standardized residency training program using Milestones evaluation system].

    PubMed

    Zhang, Y; Chu, X T; Zeng, X J; Li, H; Zhang, F C; Zhang, S Y; Shen, T

    2018-06-01

    Objective: To assess the value of the internal medicine residency training program at Peking Union Medical College Hospital (PUMCH) and the feasibility of applying the revised Milestones evaluation system. Methods: Postgraduate-year-one to -four (PGY-1 to PGY-4) residents in PUMCH completed the revised Milestones evaluation scales in September 2017. Residents' self-evaluation and faculty-evaluation scores were calculated and analyzed statistically. Results: A total of 207 residents were enrolled in this cross-sectional study. Both self and faculty scores showed an increasing trend in senior residents. PGY-1 residents, assessed during their first month of residency, scored 4 points or higher, suggesting that residents have a high starting level. More strikingly, the mean score in PGY-4 was 7 points or higher, demonstrating the career development fostered by the residency training program. There was no statistically significant difference between total self- and faculty-evaluation scores. Scores for learning ability and communication ability were lower in the faculty group (t = -2.627 and -4.279 respectively, both P < 0.05). The scores of graduate students were lower than those of standardized training residents. Conclusions: The goal of national standardized residency training is to improve the quality of healthcare and residents' career development. The evaluation results would guide curriculum design and emphasize the importance and necessity of multi-level teaching. Self-evaluation contributes to the understanding of training objectives and personal cognition.
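The self- versus faculty-evaluation comparison reported here is a paired-samples t test on matched scores. A minimal sketch follows; the scores, the sample size, and the self-minus-faculty sign convention are hypothetical, not taken from the study.

```python
import math

def paired_t(self_scores, faculty_scores):
    """Paired-samples t statistic on per-resident score differences
    (self minus faculty). Degrees of freedom would be n - 1."""
    diffs = [s - f for s, f in zip(self_scores, faculty_scores)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical communication-ability scores for eight residents:
# self-ratings run slightly above faculty ratings, so t is positive.
self_scores = [6.0, 5.5, 7.0, 6.5, 6.0, 5.0, 6.5, 7.0]
faculty = [5.5, 5.0, 6.5, 6.0, 6.0, 4.5, 6.0, 6.5]
t = paired_t(self_scores, faculty)  # t = 7.0 for these made-up data
```

With the opposite sign convention (faculty minus self) the same data would give a negative t, matching the direction of the values quoted in the abstract.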

  17. ADMS Evaluation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2018-01-23

    Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.

  18. Bringing Evaluative Learning to Life

    ERIC Educational Resources Information Center

    King, Jean A.

    2008-01-01

    This excerpt from the opening plenary asks evaluators to consider two questions regarding learning and evaluation: (a) How do evaluators know if, how, when, and what people are learning during an evaluation? and (b) In what ways can evaluation be a learning experience? To answer the first question, evaluators can apply the commonplaces of…

  19. Where Local and National Evaluators Meet: Unintended Threats to Ethical Evaluation Practice

    ERIC Educational Resources Information Center

    Rodi, Michael S.; Paget, Kathleen D.

    2007-01-01

    The ethical work of program evaluators is based on a covenant of honesty and transparency among stakeholders. Yet even under the most favorable evaluation conditions, threats to ethical standards exist and muddle that covenant. Unfortunately, ethical issues associated with different evaluation structures and contracting arrangements have received…

  20. Practice-centred evaluation and the privileging of care in health information technology evaluation

    PubMed Central

    2014-01-01

    Background Electronic Patient Records (EPRs) and telemedicine are positioned by policymakers as health information technologies that are integral to achieving improved clinical outcomes and efficiency savings. However, evaluating the extent to which these aims are met poses distinct evaluation challenges, particularly where clinical and cost outcomes form the sole focus of evaluation design. We propose that a practice-centred approach to evaluation - in which those whose day-to-day care practice is altered (or not) by the introduction of new technologies are placed at the centre of evaluation efforts – can complement and in some instances offer advantages over, outcome-centric evaluation models. Methods We carried out a regional programme of innovation in renal services where a participative approach was taken to the introduction of new technologies, including: a regional EPR system and a system to support video clinics. An ‘action learning’ approach was taken to procurement, pre-implementation planning, implementation, ongoing development and evaluation. Participants included clinicians, technology specialists, patients and external academic researchers. Whilst undergoing these activities we asked: how can a practice-centred approach be embedded into evaluation of health information technologies? Discussion Organising EPR and telemedicine evaluation around predetermined outcome measures alone can be impractical given the complex and contingent nature of such projects. It also limits the extent to which unforeseen outcomes and new capabilities are recognised. Such evaluations often fail to improve understanding of ‘when’ and ‘under what conditions’ technology-enabled service improvements are realised, and crucially, how such innovation improves care. Summary Our contribution, drawn from our experience of the case study provided, is a protocol for practice-centred, participative evaluation of technology in the clinical setting that privileges care. In

  1. Evaluation as institution: a contractarian argument for needs-based economic evaluation.

    PubMed

    Rogowski, Wolf H

    2018-06-13

    There is a gap between health economic evaluation methods and the value judgments of coverage decision makers, at least in Germany. Measuring preference satisfaction has been claimed to be inappropriate for allocating health care resources, e.g. because it disregards medical need. The existing methods oriented toward medical need have been claimed to disregard non-consequentialist fairness concerns. The aim of this article is to propose a new, contractarian argument for justifying needs-based economic evaluation. It is based on consent rather than maximization of some impersonal unit of value, in order to accommodate the fairness concerns. This conceptual paper draws upon contractarian ethics and constitutional economics to show how economic evaluation can be viewed as an institution for overcoming societal conflicts in the allocation of scarce health care resources. For this, the problem of allocating scarce health care resources in a society is reconstructed as a social dilemma. Both disadvantaged patients and affluent healthy individuals can be argued to share interests in a societal contract to provide technologies which ameliorate medical need, based on progressive funding. The use of needs-based economic evaluation methods for coverage determination can be interpreted as an institution for conflict resolution insofar as these methods use consented criteria to ensure the social contract's sustainability and avoid implicit rationing or unaffordable contribution rates. This justifies the use of needs-based evaluation methods by Pareto-superiority and consent (rather than by some needs-based value function per se). The view of economic evaluation presented here may help account for fairness concerns in the further development of evaluation methods. This is because it directs the attention away from determining some unit of value to be maximized towards determining those persons who are most likely not to consent and meeting their concerns. Following this direction in methods development is

  2. Navigating Theory and Practice through Evaluation Fieldwork: Experiences of Novice Evaluation Practitioners

    ERIC Educational Resources Information Center

    Chouinard, Jill Anne; Boyce, Ayesha S.; Hicks, Juanita; Jones, Jennie; Long, Justin; Pitts, Robyn; Stockdale, Myrah

    2017-01-01

    To explore the relationship between theory and practice in evaluation, we focus on the perspectives and experiences of student evaluators, as they move from the classroom to an engagement with the social, political, and cultural dynamics of evaluation in the field. Through reflective journals, postcourse interviews, and facilitated group…

  3. Are Online Student Evaluations of Faculty Influenced by the Timing of Evaluations?

    ERIC Educational Resources Information Center

    McNulty, John A.; Gruener, Gregory; Chandrasekhar, Arcot; Espiritu, Baltazar; Hoyt, Amy; Ensminger, David

    2010-01-01

    Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure…

  4. Is evaluating complementary and alternative medicine equivalent to evaluating the absurd?

    PubMed

    Greasley, Pete

    2010-06-01

    Complementary and alternative therapies such as reflexology and acupuncture have been the subject of numerous evaluations, clinical trials, and systematic reviews, yet the empirical evidence in support of their efficacy remains equivocal. The empirical evaluation of a therapy would normally assume a plausible rationale regarding the mechanism of action. However, examination of the historical background and underlying principles for reflexology, iridology, acupuncture, auricular acupuncture, and some herbal medicines, reveals a rationale founded on the principle of analogical correspondences, which is a common basis for magical thinking and pseudoscientific beliefs such as astrology and chiromancy. Where this is the case, it is suggested that subjecting these therapies to empirical evaluation may be tantamount to evaluating the absurd.

  5. Evaluation matters: lessons learned on the evaluation of surgical teaching.

    PubMed

    Woods, Nicole N

    2011-01-01

    The traditional system of academic promotion and tenure can make it difficult to reward those who excel at surgical teaching. A successful faculty evaluation process can provide the objective measures of teaching performance needed for performance appraisals and promotion decisions. Over the course of two decades, an extensive faculty evaluation process has been developed in the Department of Surgery at the University of Toronto. This paper presents some of the non-psychometric characteristics of that system. Faculty awareness of the evaluation process, the consistency of its application, trainee anonymity and the materiality of the results are described key factors of a faculty evaluation system that meets the assessment needs of individual teachers and raises the profile of teaching in surgical departments. Copyright © 2010 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  6. Building an Evaluative Culture: The Key to Effective Evaluation and Results Management

    ERIC Educational Resources Information Center

    Mayne, John

    2009-01-01

    As many reviews of results-based performance systems have noted, a weak evaluative culture in an organization undermines attempts at building an effective evaluation and/or results management regime. This article sets out what constitutes a strong evaluative culture where information on performance results is deliberately sought in order to learn…

  7. Making Evaluation Work for You: Ideas for Deriving Multiple Benefits from Evaluation

    ERIC Educational Resources Information Center

    Jayaratne, K. S. U.

    2016-01-01

    Increased demand for accountability has forced Extension educators to evaluate their programs and document program impacts. Due to this situation, some Extension educators may view evaluation simply as the task, imposed on them by administrators, of collecting outcome and impact data for accountability. They do not perceive evaluation as a useful…

  8. Evaluating a federated medical search engine: tailoring the methodology and reporting the evaluation outcomes.

    PubMed

    Saparova, D; Belden, J; Williams, J; Richardson, B; Schuster, K

    2014-01-01

    Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants' expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for system improvement.

  9. The effects of non-evaluative feedback on drivers' self-evaluation and performance.

    PubMed

    Dogan, Ebru; Steg, Linda; Delhomme, Patricia; Rothengatter, Talib

    2012-03-01

    Drivers tend to overestimate their competences, which may result in risk-taking behavior. Providing drivers with feedback has been suggested as one solution to overcome drivers' inaccurate self-evaluations. In practice, many tests and driving simulators provide drivers with non-evaluative feedback, which conveys information on the level of performance but not on what caused the performance. Is this type of feedback indeed effective in reducing self-enhancement biases? The current study aimed to investigate the effect of non-evaluative performance feedback on drivers' self-evaluations using a computerized hazard perception test. A between-subjects design was used, with one group receiving feedback on performance in the hazard perception test and the other group receiving none. The results indicated that drivers had a robust self-enhancement bias in their self-evaluations regardless of the presence of performance feedback, and that they systematically estimated their performance to be higher than they actually achieved in the test. Furthermore, they devalued the credibility of the test instead of adjusting their self-evaluations in order to cope with the negative feelings following the failure feedback. We discuss the theoretical and practical implications of these counterproductive effects of non-evaluative feedback. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Training evaluation final report

    NASA Technical Reports Server (NTRS)

    Sepulveda, Jose A.

    1992-01-01

    In the area of management training, 'evaluation' refers both to the specific evaluation instrument used to determine whether a training effort was considered effective, and to the procedures followed to evaluate specific training requests. This report recommends evaluating new training requests in the same way that new procurement or new projects are evaluated. This includes examining training requests from the perspective of KSC goals and objectives, and determining the expected ROI of a proposed training program (does the training result in improved productivity through savings of time, improved outputs, and/or personnel reduction?). To determine whether a specific training course is effective, a statement of what constitutes 'good performance' is required. The user (NOT the Training Branch) must define the 'required level of performance'. This 'model' will be the basis for the design and development of an objective, performance-based training evaluation instrument.

  11. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review

  12. Which Way Is Better for Teacher Evaluation? The Discourse on Teacher Evaluation in Taiwan

    ERIC Educational Resources Information Center

    Wang, Juei-Hsin; Chen, Yen-Ting

    2016-01-01

    There are no summative evaluations for compulsory and basic education in Taiwan. This research discusses and analyzes present teacher evaluation implementation. The implementation of policy nowadays means "Teacher evaluation for professional development". Teacher evaluation for professional development is a voluntary growing project of…

  13. Definitions of Evaluation Use and Misuse, Evaluation Influence, and Factors Affecting Use

    ERIC Educational Resources Information Center

    Alkin, Marvin C.; King, Jean A.

    2017-01-01

    The second article in this series on the history of evaluation use has three sections. The first and longest develops a functional definition of the term "use," noting that a thorough definition of evaluation use includes the initial stimulus (i.e., evaluation findings or process), the user, the way people use the information, the aspect…

  14. Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

    ERIC Educational Resources Information Center

    Lortie, Catherine L.; Deschamps, Isabelle; Guitton, Matthieu J.; Tremblay, Pascale

    2018-01-01

    Purpose: The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial…

  15. Analyzing the School Evaluation Use Process To Make Evaluation Worth the Effort.

    ERIC Educational Resources Information Center

    Pechman, Ellen M.; King, Jean A.

    This paper describes a structure for assessing the school evaluation use process developed from a longitudinal case study of districtwide and school level evaluation procedures in a large urban school district. Two fundamental questions guided the study: (1) Why isn't the evaluation process more useful to decision-makers and practitioners? and (2)…

  16. Relational responsibilities in responsive evaluation.

    PubMed

    Visse, Merel; Abma, Tineke A; Widdershoven, Guy A M

    2012-02-01

    This article explores how we can enhance our understanding of the moral responsibilities in daily, plural practices of responsive evaluation. It introduces an interpretive framework for understanding the moral aspects of evaluation practice. The framework supports responsive evaluators to better understand and handle their moral responsibilities. A case is introduced to illustrate our argument. Responsive evaluation contributes to the design and implementation of policy by working with stakeholders and coordinating the evaluation process as a relationally responsible practice. Responsive evaluation entails a democratic process in which the evaluator fosters and enters a partnership with stakeholders. The responsibilities of an evaluator generally involve issues such as 'confidentiality', 'accountability' and 'privacy'. The responsive evaluator has specific responsibilities, for example to include stakeholders and vulnerable groups and to foster an ongoing dialogue. In addition, responsive evaluation involves a relational responsibility, which becomes present in daily situations in which stakeholders express expectations and voice demands. In our everyday work as evaluators, it is difficult to respond to all these demands at the same time. In addition, this article demonstrates that novice evaluators experience challenges concerning over- and underidentification with stakeholders. Guidelines and quality criteria on how to act are helpful, but need interpretation and application to the unique situation at hand. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Clinical Performance Evaluations of Third-Year Medical Students and Association With Student and Evaluator Gender.

    PubMed

    Riese, Alison; Rappaport, Leah; Alverson, Brian; Park, Sangshin; Rockney, Randal M

    2017-06-01

    Clinical performance evaluations are major components of medical school clerkship grades. But are they sufficiently objective? This study aimed to determine whether student and evaluator gender is associated with assessment of overall clinical performance. This was a retrospective analysis of 4,272 core clerkship clinical performance evaluations by 829 evaluators of 155 third-year students, within the Alpert Medical School grading database for the 2013-2014 academic year. Overall clinical performance, assessed on a three-point scale (meets expectations, above expectations, exceptional), was extracted from each evaluation, as well as evaluator gender, age, training level, department, student gender and age, and length of observation time. Hierarchical ordinal regression modeling was conducted to account for clustering of evaluations. Female students were more likely to receive a better grade than males (adjusted odds ratio [AOR] 1.30, 95% confidence interval [CI] 1.13-1.50), and female evaluators awarded lower grades than males (AOR 0.72, 95% CI 0.55-0.93), adjusting for department, observation time, and student and evaluator age. The interaction between student and evaluator gender was significant (P = .03), with female evaluators assigning higher grades to female students, while male evaluators' grading did not differ by student gender. Students who spent a short time with evaluators were also more likely to get a lower grade. A one-year examination of all third-year clerkship clinical performance evaluations at a single institution revealed that male and female evaluators rated male and female students differently, even when accounting for other measured variables.
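    The adjusted odds ratios reported above come from hierarchical ordinal regression, but the underlying quantity is the familiar odds ratio. As a simplified illustration only (the counts below are hypothetical, not taken from the study), an unadjusted odds ratio with a 95% Wald confidence interval can be computed directly from a 2x2 table:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Unadjusted odds ratio for a 2x2 table with a 95% Wald CI.

        a: group 1 with outcome, b: group 1 without,
        c: group 2 with outcome, d: group 2 without.
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts: female vs. male students receiving a grade
    # above "meets expectations". Illustrative numbers only.
    or_, lo, hi = odds_ratio_ci(120, 80, 100, 90)
    ```

    An adjusted odds ratio, as in the study, additionally conditions on covariates (department, observation time, ages) within the regression model rather than being read off a single table.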

  18. Evaluating the Impact of HRD.

    ERIC Educational Resources Information Center

    1998

    This document contains four papers from a symposium on evaluating the impact of human resource development (HRD). "The Politics of Program Evaluation and the Misuse of Evaluation Findings" (Hallie Preskill, Robin Lackey) discusses the status of evaluation theory, evaluation as a political activity, and the findings from a survey on the…

  19. The Design, Development, and Evaluation of an Evaluative Computer Simulation.

    ERIC Educational Resources Information Center

    Ehrlich, Lisa R.

    This paper discusses evaluation design considerations for a computer based evaluation simulation developed at the University of Iowa College of Medicine in Cardiology to assess the diagnostic skills of primary care physicians and medical students. The simulation developed allows for the assessment of diagnostic skills of physicians in the…

  20. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  1. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  2. Evaluating Cross-Cutting Approaches to Chronic Disease Prevention and Management: Developing a Comprehensive Evaluation

    PubMed Central

    Jernigan, Jan; Barnes, Seraphine Pitt; Shea, Pat; Davis, Rachel; Rutledge, Stephanie

    2017-01-01

    We provide an overview of the comprehensive evaluation of State Public Health Actions to Prevent and Control Diabetes, Heart Disease, Obesity and Associated Risk Factors and Promote School Health (State Public Health Actions). State Public Health Actions is a program funded by the Centers for Disease Control and Prevention to support the statewide implementation of cross-cutting approaches to promote health and prevent and control chronic diseases. The evaluation addresses the relevance, quality, and impact of the program by using 4 components: a national evaluation, performance measures, state evaluations, and evaluation technical assistance to states. Challenges of the evaluation included assessing the extent to which the program contributed to changes in the outcomes of interest and the variability in the states’ capacity to conduct evaluations and track performance measures. Given the investment in implementing collaborative approaches at both the state and national level, achieving meaningful findings from the evaluation is critical. PMID:29215974

  3. Evaluating Informal Support.

    ERIC Educational Resources Information Center

    Litwin, Howard; Auslander, Gail K.

    1990-01-01

    Dilemmas inherent in the attempt to measure and evaluate informal supports available to individuals in need of social care are illustrated through a study of 400 elderly persons in Jerusalem. Practical guidelines for evaluation are presented. (SLD)

  4. Language of Evaluation: How PLA Evaluators Write about Student Learning

    ERIC Educational Resources Information Center

    Travers, Nan L.; Smith, Bernard; Ellis, Leslie; Brady, Tom; Feldman, Liza; Hakim, Kameyla; Onta, Bhuwan; Panayotou, Maria; Seamans, Laurie; Treadwell, Amanda

    2011-01-01

    Very few studies (e.g., Arnold, 1998; Joosten-ten Brinke, et al., 2009) have examined the ways in which evaluators assess students' prior learning. This investigation explored the ways that evaluators described students' prior learning in final assessment reports at a single, multiple-location institution. Results found four themes; audience,…

  5. 511 Virginia evaluation

    DOT National Transportation Integrated Search

    2004-01-01

    This document presents the results of the evaluation of the 511 Virginia Advanced Traveler Information System (ATIS), a system that operates on the I-81 corridor in Virginia. The evaluation focused on the Virginia Department of Transportation's (VDOT...

  6. An evaluability assessment of a West Africa based Non-Governmental Organization's (NGO) progressive evaluation strategy.

    PubMed

    D'Ostie-Racine, Léna; Dagenais, Christian; Ridde, Valéry

    2013-02-01

    While program evaluations are increasingly valued by international organizations to inform practices and public policies, actual evaluation use (EU) in such contexts is inconsistent. Moreover, empirical literature on EU in the context of humanitarian Non-Governmental Organizations (NGOs) is very limited. The current article focuses on the evaluability assessment (EA) of a West-Africa based humanitarian NGO's progressive evaluation strategy. Since 2007, the NGO has established an evaluation strategy to inform its maternal and child health care user-fee exemption intervention. Using Wholey's (2004) framework, the current EA enabled us to clarify with the NGO's evaluation partners the intent of their evaluation strategy and to design its program logic model. The EA ascertained the plausibility of the evaluation strategy's objectives, the accessibility of relevant data, and the utility for intended users of evaluating both the evaluation strategy and the conditions that foster EU. Hence, key evaluability conditions for an EU study were assured. This article provides an example of EA procedures when such guidance is scant in the literature. It also offers an opportunity to analyze critically the use of EAs in the context of a humanitarian NGO's collaboration with evaluators and political actors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. The supervisor's performance appraisal: evaluating the evaluator.

    PubMed

    McConnell, C R

    1993-04-01

    The focus of much performance appraisal in the coming decade or so will likely be on the level of customer satisfaction achieved through performance. Ultimately, evaluating the evaluator--that is, appraising the supervisor--will likely become a matter of assessing how well the supervisor's department meets the needs of its customers. Since meeting the needs of one's customers can well become the strongest determinant of organizational success or failure, it follows that relative success in ensuring these needs are met can become the primary indicator of one's relative success as a supervisor. This has the effect of placing the emphasis on supervisory performance exactly at the point it belongs, right on the bottom-line results of the supervisor's efforts.

  8. Who Is Afraid of Evaluation? Ethics in Evaluation Research as a Way to Cope with Excessive Evaluation Anxiety: Insights from a Case Study

    ERIC Educational Resources Information Center

    Bechar, Shlomit; Mero-Jaffe, Irit

    2014-01-01

    In this paper we share our reflections, as evaluators, on an evaluation where we encountered Excessive Evaluation Anxiety (XEA). The signs of XEA which we discerned were particularly evident amongst the program head and staff who were part of a new training program. We present our insights on the evaluation process and its difficulties, as well as…

  9. Evaluation, Language, and Untranslatables

    ERIC Educational Resources Information Center

    Dahler-Larsen, Peter; Abma, Tineke; Bustelo, María; Irimia, Roxana; Kosunen, Sonja; Kravchuk, Iryna; Minina, Elena; Segerholm, Christina; Shiroma, Eneida; Stame, Nicoletta; Tshali, Charlie Kabanga

    2017-01-01

    The issue of translatability is pressing in international evaluation, in global transfer of evaluative instruments, in comparative performance management, and in culturally responsive evaluation. Terms that are never fully understood, digested, or accepted may continue to influence issues, problems, and social interactions in and around and after…

  10. [The evaluation of science].

    PubMed

    Ramiro-H, Manuel; Cruz-A, J Enrique; García-Gómez, Francisco

    2017-01-01

    The evaluation and quantification of scientific activity is an extremely complex task. Evaluating research results is often difficult; different indicators have been identified for the quantification and evaluation of scientific publications, as well as for the development and success of researcher training programs.

  11. Technology Enhanced Teacher Evaluation

    ERIC Educational Resources Information Center

    Teter, Richard B.

    2010-01-01

    The purpose of this research and development study was to design and develop an affordable, computer-based, pre-service teacher assessment and reporting system to allow teacher education institutions and supervising teachers to efficiently enter evaluation criteria, record pre-service teacher evaluations, and generate evaluation reports. The…

  12. Learning while evaluating: the use of an electronic evaluation portfolio in a geriatric medicine clerkship

    PubMed Central

    Duque, Gustavo; Finkelstein, Adam; Roberts, Ayanna; Tabatabai, Diana; Gold, Susan L; Winer, Laura R

    2006-01-01

    Background Electronic evaluation portfolios may play a role in learning and evaluation in clinical settings and may complement other traditional evaluation methods (bedside evaluations, written exams and tutor-led evaluations). Methods 133 third-year medical students used the McGill Electronic Evaluation Portfolio (MEEP) during their one-month clerkship rotation in Geriatric Medicine between September 2002 and September 2003. Students were divided into two groups, one who received an introductory hands-on session about the electronic evaluation portfolio and one who did not. Students' marks in their portfolios were compared between both groups. Additionally, students self-evaluated their performance and received feedback using the electronic portfolio during their mandatory clerkship rotation. Students were surveyed immediately after the rotation and at the end of the clerkship year. Tutors' opinions about this method were surveyed once. Finally, the number of evaluations/month was quantified. In all surveys, Likert scales were used and were analyzed using Chi-square tests and t-tests to assess significant differences in the responses from surveyed subjects. Results The introductory session had a significant effect on students' portfolio marks as well as on their comfort using the system. Both tutors and students reported positive notions about the method. Remarkably, an average (± SD) of 520 (± 70) evaluations/month was recorded with 30 (± 5) evaluations per student/month. Conclusion The MEEP showed a significant and positive effect on both students' self-evaluations and tutors' evaluations involving an important amount of self-reflection and feedback which may complement the more traditional evaluation methods. PMID:16409640
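    The abstract reports that Likert-scale survey responses were analyzed with chi-square tests. As a minimal sketch of the statistic involved (the table below is hypothetical, not the study's data), the Pearson chi-square statistic for a contingency table of two student groups against collapsed response bands can be computed as:

    ```python
    def chi_square_statistic(table):
        """Pearson chi-square statistic for a 2D contingency table.

        Sums (observed - expected)^2 / expected over all cells, where
        expected counts come from the row and column margins.
        """
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        grand = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(table):
            for j, obs in enumerate(row):
                expected = row_totals[i] * col_totals[j] / grand
                stat += (obs - expected) ** 2 / expected
        return stat

    # Hypothetical 2x3 table: two groups x Likert responses collapsed
    # into disagree / neutral / agree bands.
    stat = chi_square_statistic([[30, 50, 20], [20, 40, 40]])
    ```

    In practice the statistic would be compared against a chi-square distribution with (rows - 1) x (columns - 1) degrees of freedom to obtain a p-value.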

  13. Re-Evaluating Student Evaluation of Teaching: The Teaching Evaluation Form.

    ERIC Educational Resources Information Center

    Wolfer, Terry A.; Johnson, Miriam McNown

    2003-01-01

    Reports on the aggregate analysis of scores generated by a standardized instrument, the Teaching Evaluation Form (TEF; Hudson, 1982), at the College of Social Work, University of South Carolina. Data included more than 11,000 completions of the instrument in 508 class sections offered during a 4-year period. Analysis revealed a severely negatively…

  14. Formative Evaluation of Lectures; An Application of Stake's Evaluation Framework.

    ERIC Educational Resources Information Center

    Westphal, Walter W.; And Others

    The problem of major concern to the Physics Education Evaluation Project (P.E.E.P.) involved the improvement of university physics teaching and learning. The present paper describes instruments and procedures developed for systematic formative evaluation of physics lectures. The data was drawn from two sections of a first year university physics…

  15. An Evaluation Framework and Instrument for Evaluating e-Assessment Tools

    ERIC Educational Resources Information Center

    Singh, Upasana Gitanjali; de Villiers, Mary Ruth

    2017-01-01

    e-Assessment, in the form of tools and systems that deliver and administer multiple choice questions (MCQs), is used increasingly, raising the need for evaluation and validation of such systems. This research uses literature and a series of six empirical action research studies to develop an evaluation framework of categories and criteria called…

  16. Empowerment evaluation: a collaborative approach to evaluating and transforming a medical school curriculum.

    PubMed

    Fetterman, David M; Deitz, Jennifer; Gesundheit, Neil

    2010-05-01

    Medical schools continually evolve their curricula to keep students abreast of advances in basic, translational, and clinical sciences. To provide feedback to educators, critical evaluation of the effectiveness of these curricular changes is necessary. This article describes a method of curriculum evaluation, called "empowerment evaluation," that is new to medical education. It mirrors the increasingly collaborative culture of medical education and offers tools to enhance the faculty's teaching experience and students' learning environments. Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes. It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform. The steps of empowerment evaluation include collecting evaluation data, designating a "critical friend" to communicate areas of potential improvement, establishing a culture of evidence, encouraging a cycle of reflection and action, cultivating a community of learners, and developing reflective educational practitioners. This article illustrates how stakeholders used the principles of empowerment evaluation to facilitate yearly cycles of improvement at the Stanford University School of Medicine, which implemented a major curriculum reform in 2003-2004. The use of empowerment evaluation concepts and tools fostered greater institutional self-reflection, led to an evidence-based model of decision making, and expanded opportunities for students, faculty, and support staff to work collaboratively to improve and refine the medical school's curriculum.

  17. Empowerment Evaluation: Yesterday, Today, and Tomorrow

    ERIC Educational Resources Information Center

    Fetterman, David; Wandersman, Abraham

    2007-01-01

    Empowerment evaluation continues to crystallize central issues for evaluators and the field of evaluation. A highly attended American Evaluation Association conference panel, titled "Empowerment Evaluation and Traditional Evaluation: 10 Years Later," provided an opportunity to reflect on the evolution of empowerment evaluation. Several…

  18. Grounding Evaluations in Culture

    ERIC Educational Resources Information Center

    Samuels, Maurice; Ryan, Katherine

    2011-01-01

    The emergence of and the attention given to culture in the evaluation field over the last decade has created a heightened awareness of and need for evaluators to understand the complexity and multidimensionality of evaluations within multicultural, multiracial, and cross-cultural contexts. In this article, the authors discuss how cultural…

  19. The Evaluation of Teachers.

    ERIC Educational Resources Information Center

    National Education Association, Washington, DC. Div. of Instruction and Professional Development.

    The several components of this package on the evaluation of teachers and educational programs are designed to help affiliates deal constructively with the subject. The issue of evaluation continues to intensify as state legislatures increasingly mandate that evaluation systems be imposed throughout the state to measure the performance of teachers…

  20. Leniency, Learning, and Evaluations.

    ERIC Educational Resources Information Center

    Palmer, John; And Others

    With student evaluations of instructor effectiveness playing an increasingly important role in the determination of merit pay, promotion, and tenure, there is a growing interest in what these evaluations actually measure. Faculty members frequently voice doubts about using student evaluations, because it is not clear to what extent they measure…

  1. THE ATMOSPHERIC MODEL EVALUATION TOOL

    EPA Science Inventory

    This poster describes a model evaluation tool that is currently being developed and applied for meteorological and air quality model evaluation. The poster outlines the framework and provides examples of statistical evaluations that can be performed with the model evaluation tool...

  2. Evaluating Federal Education Programs.

    ERIC Educational Resources Information Center

    Baker, Eva L., Ed.

    A series of papers was developed by the authors as part of their deliberations as members of the National Research Council's Committee of Program Evaluation in Education. The papers provide a broad range of present evaluative thinking. The conflict between preferences in evaluation methodology comes through in these papers. The selections include:…

  3. Encyclopedia of Educational Evaluation: Concepts and Techniques for Evaluating Education and Training Programs.

    ERIC Educational Resources Information Center

    Anderson, Scarvia B.; And Others

    Arranged like an encyclopedia, this book, addressed to directors and sponsors of education/training programs, as well as evaluators and those studying to become evaluators, unifies and systematizes the field of evaluation by organizing its main concepts and techniques into one volume. Researched and documented articles, contributed by recognized…

  4. Evaluation in Human Resource Development.

    ERIC Educational Resources Information Center

    1999

    These four papers are from a symposium on evaluation in human resource development (HRD). "Assessing Organizational Readiness for Learning through Evaluative Inquiry" (Hallie Preskill, Rosalie T. Torres) reviews how evaluative inquiry can facilitate organizational learning; argues HRD evaluation should be reconceptualized as a process…

  5. Social Studies. Microsift Courseware Evaluations.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This compilation of 17 courseware evaluations gives a general overview of available social studies microcomputer courseware for students in grades 1-12. Each evaluation lists title, date, producer, date of evaluation, evaluating institution, cost, ability level, topic, medium of transfer, required hardware, required software, instructional…

  6. Reevaluating Teaching Evaluations

    ERIC Educational Resources Information Center

    D'Agostino, Susan; Kosegarten, Jay

    2015-01-01

    In this article, the authors propose the use of new terminology when discussing teaching evaluations. Surveys can be considered as providing students an opportunity for "feedback" about teachers, not "evaluations" of teachers. Students, professors, and administrators should not view the surveys as an opportunity to judge a…

  7. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    NASA Astrophysics Data System (ADS)

    Jiang, Yingni

    2018-03-01

    Because data centers consume large amounts of energy, energy-saving measures must be enforced, but the lack of evaluation mechanisms has slowed progress on energy-saving construction of data centers. In this paper, an energy saving evaluation index system for data centers was constructed after clarifying the influencing factors. Based on this index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.
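    The AHP-plus-fuzzy pipeline described in this abstract can be sketched as follows. The pairwise comparison matrix, index count, and membership degrees are hypothetical placeholders, not data from the paper; AHP weights are taken as the normalized principal eigenvector of the comparison matrix, approximated here by power iteration.

    ```python
    # Sketch of AHP weighting followed by fuzzy comprehensive evaluation.
    # All numbers below are illustrative, not from the study.

    def principal_eigvec(M, iters=200):
        """Approximate the principal eigenvector of a square matrix by
        power iteration, normalized so entries sum to 1 (AHP weights)."""
        n = len(M)
        v = [1.0 / n] * n
        for _ in range(iters):
            v = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
            s = sum(v)
            v = [x / s for x in v]
        return v

    # Pairwise comparisons of three energy-saving indexes (Saaty 1-9 scale).
    A = [
        [1.0,     3.0,     5.0],
        [1 / 3.0, 1.0,     3.0],
        [1 / 5.0, 1 / 3.0, 1.0],
    ]
    w = principal_eigvec(A)  # AHP index weights

    # Membership degrees of each index over three grades (good/fair/poor).
    R = [
        [0.6, 0.3, 0.1],
        [0.4, 0.4, 0.2],
        [0.2, 0.5, 0.3],
    ]

    # Fuzzy comprehensive evaluation: B = w . R, then pick the top grade.
    B = [sum(w[i] * R[i][j] for i in range(len(w))) for j in range(3)]
    grades = ["good", "fair", "poor"]
    print(B, grades[B.index(max(B))])
    ```

    With weights and membership rows each summing to 1, the synthesized grade vector B also sums to 1, so it can be read directly as a fuzzy grade distribution.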

  8. 34 CFR 76.592 - Federal evaluation-satisfying requirement for State or subgrantee evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Federal evaluation-satisfying requirement for State or subgrantee evaluation. 76.592 Section 76.592 Education Office of the Secretary, Department of Education STATE-ADMINISTERED PROGRAMS What Conditions Must Be Met by the State and Its Subgrantees? Evaluation § 76.592 Federal...

  9. Self-evaluative emotions and expectations about self-evaluative emotions in health-behaviour change.

    PubMed

    Dijkstra, Arie; Buunk, Abraham P

    2008-03-01

    Engaging in a behaviour that has negative physical consequences is considered to be a threat to the self because it makes the self appear inadequate and non-adaptive. This self-threat is experienced as self-evaluative emotions. The self-threat can be removed by refraining from the unhealthy behaviour. The experience of self-threat influences behaviour because it contributes to expectations about the occurrence of self-evaluative emotions in the case of behaviour change. The results of Study 1, conducted among 503 smokers, showed that self-evaluative emotions were the central predictor of quitting activity during a 7-month period, among measures related to the negative consequences of smoking. The results of Study 2, conducted among 409 smokers, showed that expectations about the self-evaluative emotions that follow quitting smoking predicted quitting activity during a 9-month period and that these expectations partly mediated the relation between self-evaluative emotions and quitting. The results of Study 3, conducted among 255 smokers, showed that information on the negative outcomes of smoking led to quitting activity only when there was room to change self-evaluative outcome expectations. In addition, increases in these expectations predicted quitting activity during a 6-month period. The results suggest that negative self-evaluative emotions are a central motive to change unhealthy behaviour and that self-evaluative outcome expectations govern the behaviour change. The results can be understood within Steele's (1999) Self-affirmation theory.

  10. Social-evaluative versus self-evaluative appearance concerns in Body Dysmorphic Disorder.

    PubMed

    Anson, Martin; Veale, David; de Silva, Padmal

    2012-12-01

    Body Dysmorphic Disorder (BDD) is characterised by significant preoccupation and distress relating to an imagined or slight defect in appearance. Individuals with BDD frequently report marked concerns relating to perceived negative evaluation of their appearance by others, but research specifically investigating such concerns remains limited. This study investigated the extent and nature of appearance-related social-evaluative and self-evaluative concerns in individuals with BDD and healthy controls. BDD participants, in comparison to controls, reported high levels of importance and anxiety associated with perceptions of others' views of their appearance, in addition to their own view. No differences were observed in the level of importance and anxiety associated with their self-view in comparison to others' views. These findings support existing evidence indicating that appearance-related social-evaluative concerns are a central feature of BDD. Cognitive-behavioural treatment implications are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool.

    PubMed

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M; Nuckley, David J; Keefe, Daniel F

    2012-10-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations.

  12. Toward Mixed Method Evaluations of Scientific Visualizations and Design Process as an Evaluation Tool

    PubMed Central

    Jackson, Bret; Coffey, Dane; Thorson, Lauren; Schroeder, David; Ellingson, Arin M.; Nuckley, David J.

    2017-01-01

    In this position paper we discuss successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations. PMID:28944349

  13. Evaluation Thesaurus. Third Edition.

    ERIC Educational Resources Information Center

    Scriven, Michael

    This is a thesaurus of terms used in evaluation. It is not restricted in scope to educational or program evaluation. It refers to product and personnel and proposal evaluation as well as to quality control and the grading of work samples. The text contains practical suggestions and procedures, comments and criticisms, as well as definitions and…

  14. Internal Evaluation, Historically Speaking

    ERIC Educational Resources Information Center

    Mathison, Sandra

    2011-01-01

    The author analyzes the growth and nature of internal evaluation from the 1960s to the present and suggests that internal evaluation has been on the increase because of its perceived importance. Although the 1960s were characterized by a rich intellectual development of evaluation theory and practice, the fiscal conservatism of the 1980s ushered…

  15. First Grade Baseline Evaluation

    ERIC Educational Resources Information Center

    Center for Innovation in Assessment (NJ1), 2013

    2013-01-01

    The First Grade Baseline Evaluation is an optional tool that can be used at the beginning of the school year to help teachers get to know the reading and language skills of each student. The evaluation is composed of seven screenings. Teachers may use the entire evaluation or choose to use those individual screenings that they find most beneficial…

  16. Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.

    PubMed

    Campagne, Fabien

    2008-02-29

    The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79-0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86-0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute
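    The comparison of NT Evaluation rankings with TREC-based rankings rests on Spearman's rank correlation. A minimal sketch of that computation follows; the method ranks below are hypothetical, not the rankings from the study.

    ```python
    # Spearman's rank correlation between two performance rankings,
    # using the closed-form formula for rankings without ties:
    #   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))

    def spearman_rho(rank_a, rank_b):
        n = len(rank_a)
        d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

    # Ranks of six hypothetical retrieval methods under each protocol.
    nt_ranks = [1, 2, 3, 4, 5, 6]
    trec_ranks = [2, 1, 3, 4, 6, 5]
    print(spearman_rho(nt_ranks, trec_ranks))
    ```

    A coefficient near 1 indicates that the two protocols order the methods almost identically, which is the property the NT Evaluation protocols aim to demonstrate.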

  17. Evaluation Survivor: How to Outwit, Outplay, and Outlast as an Internal Government Evaluator

    ERIC Educational Resources Information Center

    Kniker, Ted

    2011-01-01

    This chapter describes the author's experience and insights as the chief of evaluation for public diplomacy at the U.S. Department of State, and as a consultant assisting federal agencies in a multitude of evaluation activities relating what helps and hinders internal evaluation functions. The article discusses the challenges faced by the internal…

  18. Language Program Evaluation

    ERIC Educational Resources Information Center

    Norris, John M.

    2016-01-01

    Language program evaluation is a pragmatic mode of inquiry that illuminates the complex nature of language-related interventions of various kinds, the factors that foster or constrain them, and the consequences that ensue. Program evaluation enables a variety of evidence-based decisions and actions, from designing programs and implementing…

  19. Educational Evaluation: Analysis and Responsibility.

    ERIC Educational Resources Information Center

    Apple, Michael W., Ed.; And Others

    This book presents controversial aspects of evaluation and aims at broadening perspectives and insights in the evaluation field. Chapter 1 criticizes modes of evaluation and the basic rationality behind them and focuses on assumptions that have problematic consequences. Chapter 2 introduces concepts of evaluation and examines methods of grading…

  20. 48 CFR 17.206 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation. 17.206 Section... CONTRACT TYPES SPECIAL CONTRACTING METHODS Options 17.206 Evaluation. (a) In awarding the basic contract... officer need not evaluate offers for any option quantities when it is determined that evaluation would not...

  1. The Future of Principal Evaluation

    ERIC Educational Resources Information Center

    Clifford, Matthew; Ross, Steven

    2012-01-01

    The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience tell that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…

  2. Evaluation of Science.

    PubMed

    Usmani, Adnan Mahmmood; Meo, Sultan Ayoub

    2011-01-01

    Scientific achievement, such as publishing a manuscript in a peer-reviewed biomedical journal, is an important ingredient of research, along with career-enhancing advantages and a significant amount of personal satisfaction. The road to evaluating science (research, scientific publications) among scientists often seems complicated. A scientist's career is generally summarized by the number of publications and citations; teaching undergraduate, graduate, and post-doctoral students; writing or reviewing grants and papers; preparing for and organizing meetings; participating in collaborations and conferences; advising colleagues; and serving on editorial boards of scientific journals. Scientists have been sizing up their colleagues since science began. Scientometricians have invented a wide variety of algorithms, called science metrics, to evaluate science. Many of these metrics are unknown even to the everyday scientist. Unfortunately, there is no all-in-one metric: each has its own strengths, limitations, and scope. Some are mistakenly applied to evaluate individuals, and each is surrounded by a cloud of variants designed to help it apply across different scientific fields or career stages [1]. A suitable indicator should be chosen by considering the purpose of the evaluation and how the results will be used. Scientific evaluation assists us in computing research performance, comparing with peers, forecasting growth, identifying excellence in research, ranking by citations, finding the influence of research, measuring productivity, making policy decisions, securing funds for research, and spotting trends. Key concepts in science metrics are output and impact. Evaluation of science is traditionally expressed in terms of citation counts. Although most science metrics are based on citation counts, the two most commonly used are the impact factor [2] and the h-index [3].

  3. Insight into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals

    ERIC Educational Resources Information Center

    Christie, Christina A.; Fleischer, Dreolin Nesbitt

    2010-01-01

    To describe the recent practice of evaluation, specifically method and design choices, the authors performed a content analysis on 117 evaluation studies published in eight North American evaluation-focused journals for a 3-year period (2004-2006). The authors chose this time span because it follows the scientifically based research (SBR)…

  4. Evaluating Motor and Perceptual-Motor Development: Evaluating the Psychomotor Functioning of Infants and Young Children.

    ERIC Educational Resources Information Center

    Cooper, Walter E.

    The author considers the importance of evaluating preschoolers' perceptual motor development, the usefulness of various evaluation techniques, and the specific psychomotor abilities that require evaluation. He quotes researchers to underline the difficulty of choosing appropriate evaluative techniques and to stress the importance of taking…

  5. Golden Is the Sand: Memory and Hope in Evaluation Policy and Evaluation Practice

    ERIC Educational Resources Information Center

    Datta, Lois-ellin

    2009-01-01

    Going from thought to action in influencing evaluation policy is an overdue, untried, and perhaps anxious-making role for the American Evaluation Association. We will need good courage, sustained conversation, and the widest views. The courage is needed in remembering that although this isn't going to be fast or easy, evaluation policies make a…

  6. Informing the Discussion on Evaluator Training: A Look at Evaluators' Course Taking and Professional Practice

    ERIC Educational Resources Information Center

    Christie, Christina A.; Quiñones, Patricia; Fierro, Leslie

    2014-01-01

    This classification study examines evaluators' coursework training as a way of understanding evaluation practice. Data regarding courses that span methods and evaluation topics were collected from evaluation practitioners. Using latent class analysis, we establish four distinct classes of evaluator course-taking patterns: quantitative,…

  7. 21 CFR 900.23 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Evaluation. 900.23 Section 900.23 Food and Drugs... STANDARDS ACT MAMMOGRAPHY States as Certifiers § 900.23 Evaluation. FDA shall evaluate annually the performance of each certification agency. The evaluation shall include the use of performance indicators that...

  8. Evaluation Theory, Models, and Applications

    ERIC Educational Resources Information Center

    Stufflebeam, Daniel L.; Shinkfield, Anthony J.

    2007-01-01

    "Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…

  9. Evaluating Afterschool Programs

    ERIC Educational Resources Information Center

    Little, Priscilla M.

    2014-01-01

    Well-implemented afterschool programs can promote a range of positive learning and developmental outcomes. However, not all research and evaluation studies have shown the benefits of participation, in part because programs and their evaluation were out of sync. This chapter provides practical guidance on how to foster that alignment between…

  10. "Follow Through" Evaluation.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Camilli, Gregory A.

    Two questions are addressed in this document: What is worth knowing about Project Follow Through? and, How should the National Institute of Education (NIE) evaluate the Follow Through program? Discussion of the first question focuses on findings of past Follow Through evaluations, problems associated with the use of experimental design and…

  11. Task-Oriented Evaluation.

    ERIC Educational Resources Information Center

    Kanis, Ira B.

    1992-01-01

    In 1985, participants in the Second International Science Study developed and evaluated hands-on problem-solving activities and gave students the opportunity to demonstrate mastery of science process skills. Six evaluation stations for fifth and sixth graders are presented: Blowing in a Liquid, Compare and Contrast, Electrical Circuit, Hot and…

  12. Conceptualizing Programme Evaluation

    ERIC Educational Resources Information Center

    Hassan, Salochana

    2013-01-01

    The main thrust of this paper deals with the conceptualization of theory-driven evaluation pertaining to a tutor training programme. Conceptualization of evaluation, in this case, is an integration between a conceptualization model as well as a theoretical framework in the form of activity theory. Existing examples of frameworks of programme…

  13. Evaluation and Communication: Using a Communication Audit to Evaluate Organizational Communication

    ERIC Educational Resources Information Center

    Hogard, Elaine; Ellis, Roger

    2006-01-01

    This article identifies a surprising dearth of studies that explicitly link communication and evaluation at substantive, theoretical, and methodological levels. A three-fold typology of evaluation studies referring to communication is proposed and examples given. The importance of organizational communication in program delivery is stressed and…

  14. Organizational Capacity to Do and Use Evaluation: Results of a Pan-Canadian Survey of Evaluators

    ERIC Educational Resources Information Center

    Cousins, J. Bradley; Elliott, Catherine; Amo, Courtney; Bourgeois, Isabelle; Chouinard, Jill; Goh, Swee C.; Lahey, Robert

    2008-01-01

    Despite increasing interest in the integration of evaluative inquiry into organizational functions and culture, the availability of empirical research addressing organizational capacity building to do and use evaluation is limited. This exploratory descriptive survey of internal evaluators in Canada asked about evaluation capacity building in the…

  15. 21 CFR 900.5 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Evaluation. 900.5 Section 900.5 Food and Drugs... STANDARDS ACT MAMMOGRAPHY Accreditation § 900.5 Evaluation. FDA shall evaluate annually the performance of each accreditation body. Such evaluation shall include an assessment of the reports of FDA or State...

  16. Evaluating Web Usability

    ERIC Educational Resources Information Center

    Snider, Jean; Martin, Florence

    2012-01-01

    Web usability focuses on design elements and processes that make web pages easy to use. A website for college students was evaluated for underutilization. One-on-one testing, focus groups, web analytics, peer university review and marketing focus group and demographic data were utilized to conduct usability evaluation. The results indicated that…

  17. Evaluation Systems, Ethics, and Development Evaluation

    ERIC Educational Resources Information Center

    Thomas, Vinod

    2010-01-01

    After some 65 years of international development assistance, it is still difficult to show the effectiveness of aid in ways that are fully convincing. In part, this reflects inadequacies in the evaluation systems of the bilateral, multilateral, and global organizations that provide official development aid. Underlying these weaknesses often are a…

  18. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The PC software categories covered include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  19. Systematic, Cooperative Evaluation.

    ERIC Educational Resources Information Center

    Nassif, Paula M.

    Evaluation procedures based on a systematic evaluation methodology, decision-maker validity, new measurement and design techniques, low cost, and a high level of cooperation on the part of the school staff were used in the assessment of a public school mathematics program for grades 3-8. The mathematics curriculum was organized into Spirals which…

  20. Evaluating Teachers' Professional Development Initiatives: Towards an Extended Evaluative Framework

    ERIC Educational Resources Information Center

    Merchie, Emmelien; Tuytens, Melissa; Devos, Geert; Vanderlinde, Ruben

    2018-01-01

    Evaluating teachers' professional development initiatives (PDI) is one of the main challenges for the teacher professionalisation field. Although different studies have focused on the effectiveness of PDI, the obtained effects and evaluative methods have been found to be widely divergent. By means of a narrative review, this study provides an…

  1. Aspects of Successful Evaluation Practice at an Established Private Evaluation Firm

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Smith, Nick L.; Hwalek, Melanie

    2011-01-01

    This article is third in a series of exemplary cases under the two current section editors. The first two cases (Brandon, Smith, Trenholm, & Devaney, 2010; Smith, Brandon, Lawton, & Krohn-Ching, 2010) profiled evaluations that had been designated as exemplary by professional associations. The present case profiles an evaluation organization:…

  2. Evaluation For Intelligent Transportation Systems, Evaluation Methodologies

    DOT National Transportation Integrated Search

    1996-03-01

    The briefing also presents thoughts on evaluation in light of the recent launch of Operation Timesaver, the model deployment initiative for four different cities, and the implications of the recent "Government Performance and Results Act" that requir...

  3. Casing Out Evaluation: Expanding Student Interest in Program Evaluation through Case Competitions.

    ERIC Educational Resources Information Center

    Obrecht, Michael; Porteous, Nancy; Haddock, Blair

    1998-01-01

    Describes the authors' experiences in organizing bilingual evaluation case competitions for the National Capital Chapter of the Canadian Evaluation Society for three years. Competition structure, eligibility, judging, contestant recruiting, and preparing cases are outlined. (SLD)

  4. Evaluating Health Information Systems Using Ontologies.

    PubMed

    Eivazzadeh, Shahryar; Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-06-16

    There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems, whether similar or heterogeneous, by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and deployed across European Union
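    The tree-style aggregation idea behind UVON can be illustrated with a much-simplified sketch: quality attributes form a tree, leaves carry scores collected from individual systems, and each internal node aggregates its children (here with a plain mean). The attribute names, scores, and mean-based aggregation are illustrative assumptions, not details of the actual UVON implementation or FI-STAR data.

    ```python
    # Simplified tree of quality attributes. Leaves hold scores from
    # individual systems; internal nodes aggregate over their children.
    from statistics import mean

    tree = {
        "usability": {"learnability": [4, 5], "satisfaction": [3]},
        "interoperability": {"data_exchange": [4]},
    }

    def aggregate(node):
        # Leaves are lists of scores; internal nodes are dicts of children.
        if isinstance(node, list):
            return mean(node)
        return mean(aggregate(child) for child in node.values())

    print(aggregate(tree))
    ```

    Because aggregation happens node by node, heterogeneous systems that contribute scores to different leaves can still be rolled up and compared at any shared ancestor in the attribute tree.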

  5. Evaluating Practice-Based Learning.

    PubMed

    Logue, Nancy C

    2017-03-01

    Practice-based learning is an essential aspect of nursing education, and evaluating this form of learning is vital in determining whether students have the competence required to enter nursing practice. However, limited knowledge exists about the influences that shape how competence development is recognized in nursing programs. The purpose of this study was to explore the evaluation of practice-based learning from the students' standpoint. A qualitative design based on institutional ethnography was used to investigate evaluation of practice-based learning with students, preceptors, and faculty in a preceptorship practicum. The findings revealed how work associated with evaluation was organized by coexisting and often disparate influences within a nursing program and the workplaces where learning took place. The implications and recommendations of the inquiry are intended to encourage dialogue and action among stakeholders from education and practice to improve evaluation practices in preparing new members of the nursing profession. [J Nurs Educ. 2017;56(3):131-138.]. Copyright 2017, SLACK Incorporated.

  6. What happened after the evaluation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, C L

    1999-03-12

    An ergonomics program, including ergonomic computer workstation evaluations, at a research and development facility was assessed three years after formal implementation. As part of the assessment, 53 employees who had been subjects of computer workstation evaluations were interviewed. The documented reports (ergonomic evaluation forms) of the ergonomic evaluations were used in the process of selecting the interview subjects. The evaluation forms also provided information about the aspects of the computer workstation that were discussed and recommended as part of the evaluation, although the amount of detail and completeness of the forms varied. Although the results were mixed and reflective of the multivariate psychosocial factors affecting employees working in a large organization, the findings led to recommendations for improvements of the program.

  7. Grounds Maintenance Evaluation.

    ERIC Educational Resources Information Center

    Chesapeake Public Schools, VA. Office of Program Evaluation.

    The Grounds Shop of the Chesapeake Public School Division (Virginia) Department of School Plants was evaluated in 1995-96. The goals of the grounds maintenance program are to provide safe and attractive grounds for students, parents, and staff of the school district. The evaluation examined the extent to which these goals are being met by using…

  8. Final Evaluation Report 1976-77. Systemwide Evaluation. Publication Number: 76.70.

    ERIC Educational Resources Information Center

    Austin Independent School District, TX. Office of Research and Evaluation.

    A series of reports describes the activities of the Office of Research and Evaluation and compiles data descriptive of the Austin (Texas) Independent School District. This report describes the system-wide evaluation data for the school year 1976-77, which demonstrate improved performance in the basic skills areas of reading and mathematics,…

  9. Student Evaluation Standards: A Paradigm Shift for the Evaluation of Students

    ERIC Educational Resources Information Center

    Gullickson, Arlen R.

    2005-01-01

    "The Student Evaluation Standards" were born from the frustrations and concerns of many people who want the evaluations to better serve student learning. This paper describes those standards and how they serve student learning. The first part of this paper lays groundwork--context--by providing strong evidence that, at least in the…

  10. Maximizing the Impact of Program Evaluation: A Discrepancy-Based Process for Educational Program Evaluation.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    This paper describes a formative/summative process for educational program evaluation, which is appropriate for higher education programs and is based on M. Provus' Discrepancy Evaluation Model and the principles of instructional design. The Discrepancy Based Methodology for Educational Program Evaluation facilitates systematic and detailed…

  11. TRIM.FaTE Evaluation Report

    EPA Pesticide Factsheets

    The TRIM.FaTE Evaluation Report is composed of three volumes. Volume I presents conceptual, mechanistic, and structural complexity evaluations of various aspects of the model. Volumes II and III present performance evaluation.

  12. The Logic of Evaluative Argument. CSE Monograph Series in Evaluation, 7.

    ERIC Educational Resources Information Center

    House, Ernest R.

    Evaluation is an act of persuasion directed to a specific audience concerning the solution of a problem. The process of evaluation is prescribed by the nature of knowledge--which is generally complex, always uncertain (in varying degrees), and not always propositional--and by the nature of logic, which is always selective. In the process of…

  13. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and
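    The abstract above describes the core of the UVON method: merging the quality attributes of several systems into a tree-style ontology, then extracting evaluation aspects at a chosen degree of specificity. A minimal sketch of that idea follows; all names and data here are illustrative assumptions based only on the abstract, not the authors' actual implementation.

    ```python
    # Hypothetical sketch of a UVON-style aggregation: quality attributes from
    # several systems are unified into one tree, then evaluation aspects are
    # extracted at a requested degree of specificity (tree depth).
    from collections import defaultdict

    def make_tree():
        """A tree node: children keyed by attribute name."""
        return defaultdict(make_tree)

    def unify(tree, attribute_paths):
        """Merge one system's quality attributes, given as root-to-leaf
        paths (e.g. ['usability', 'learnability']), into the shared tree."""
        for path in attribute_paths:
            node = tree
            for name in path:
                node = node[name]

    def extract_aspects(tree, max_depth, prefix=()):
        """Walk the unified tree down to max_depth, yielding evaluation
        aspects at the requested degree of specificity."""
        for name, child in sorted(tree.items()):
            path = prefix + (name,)
            if len(path) == max_depth or not child:
                yield ".".join(path)
            else:
                yield from extract_aspects(child, max_depth, path)

    # Two hypothetical eHealth systems contribute overlapping attributes.
    tree = make_tree()
    unify(tree, [["usability", "learnability"], ["security", "confidentiality"]])
    unify(tree, [["usability", "efficiency"], ["security", "confidentiality"]])

    # Coarse, shared aspects at depth 1; finer-grained aspects at depth 2.
    print(list(extract_aspects(tree, max_depth=1)))  # → ['security', 'usability']
    print(list(extract_aspects(tree, max_depth=2)))
    ```

    Capping the depth (or, analogously, the number of yielded aspects) mirrors the paper's "evaluation case practicalities" such as the maximum number of aspects to be measured.
    
    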

  14. 48 CFR 45.202 - Evaluation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation procedures. 45... MANAGEMENT GOVERNMENT PROPERTY Solicitation and Evaluation Procedures 45.202 Evaluation procedures. (a) The... evaluation purposes only, a rental equivalent evaluation factor. (b) The contracting officer shall ensure the...

  15. 34 CFR 300.304 - Evaluation procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Educational Placements Evaluations and Reevaluations § 300.304 Evaluation procedures. (a) Notice. The public... conducting the evaluation, the public agency must— (1) Use a variety of assessment tools and strategies to... evaluation procedures. Each public agency must ensure that— (1) Assessments and other evaluation materials...

  16. The Practice of Health Program Evaluation.

    PubMed

    Lewis, Sarah R

    2017-11-01

    The Practice of Health Program Evaluation provides an overview of the evaluation process for public health programs while diving deeper to address select advanced concepts and techniques. The book unfolds evaluation as a three-phased process consisting of identification of evaluation questions, data collection and analysis, and dissemination of results and recommendations. The text covers research design, sampling methods, as well as quantitative and qualitative approaches. Types of evaluation are also discussed, including economic assessment and systems research as relative newcomers. Aspects critical to conducting a successful evaluation regardless of type or research design are emphasized, such as stakeholder engagement, validity and reliability, and adoption of sound recommendations. The book encourages evaluators to document their approach by developing an evaluation plan, a data analysis plan, and a dissemination plan, in order to help build consensus throughout the process. The evaluative text offers a good bird's-eye view of the evaluation process, while offering guidance for evaluation experts on how to navigate political waters and advocate for their findings to help affect change.

  17. Evaluating the Reference Interview: A Theoretical Discussion of the Desirability and Achievability of Evaluation.

    ERIC Educational Resources Information Center

    Smith, Lisa L.

    1991-01-01

    Review and examination of the current literature on reference interview evaluation explores the degree to which such evaluative practices are both desirable and achievable. It is concluded that, if both quantitative and qualitative techniques are appropriately used, accurate mechanisms of evaluation are possible and desirable. (17 references) (LRW)

  18. GIS in Evaluation: Utilizing the Power of Geographic Information Systems to Represent Evaluation Data

    ERIC Educational Resources Information Center

    Azzam, Tarek; Robinson, David

    2013-01-01

    This article provides an introduction to geographic information systems (GIS) and how the technology can be used to enhance evaluation practice. As a tool, GIS enables evaluators to incorporate contextual features (such as accessibility of program sites or community health needs) into evaluation designs and highlights the interactions between…

  19. Case study of evaluations that go beyond clinical outcomes to assess quality improvement diabetes programmes using the Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE).

    PubMed

    Paquette-Warren, Jann; Harris, Stewart B; Naqshbandi Hayward, Mariam; Tompkins, Jordan W

    2016-10-01

    Investments in efforts to reduce the burden of diabetes on patients and health care are critical; however, more evaluation is needed to provide evidence that informs and supports future policies and programmes. The newly developed Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE) incorporates the theoretical concepts needed to facilitate the capture of critical information to guide investments, policy and programmatic decision making. The aim of the study is to assess the applicability and value of DEFINE in comprehensive real-world evaluation. Using a critical and positivist approach, this intrinsic and collective case study retrospectively examines two naturalistic evaluations to demonstrate how DEFINE could be used when conducting real-world comprehensive evaluations in health care settings. The variability between the cases and the evaluation designs are described and aligned to the DEFINE goals, steps and sub-steps. The majority of the theoretical steps of DEFINE were exemplified in both cases, although limited for knowledge translation efforts. Application of DEFINE to evaluate diverse programmes that target various chronic diseases is needed to further test the inclusivity and built-in flexibility of DEFINE and its role in encouraging more comprehensive knowledge translation. This case study shows how DEFINE could be used to structure or guide comprehensive evaluations of programmes and initiatives implemented in health care settings and support scale-up of successful innovations. Future use of the framework will continue to strengthen its value in guiding programme evaluation and informing health policy to reduce the burden of diabetes and other chronic diseases. © 2016 The Authors. Journal of Evaluation in Clinical Practice published by John Wiley & Sons, Ltd.

  20. Fear of negative evaluation modulates electrocortical and behavioral responses when anticipating social evaluative feedback

    PubMed Central

    Van der Molen, Melle J. W.; Poppelaars, Eefje S.; Van Hartingsveldt, Caroline T. A.; Harrewijn, Anita; Gunther Moor, Bregtje; Westenberg, P. Michiel

    2014-01-01

    Cognitive models posit that the fear of negative evaluation (FNE) is a hallmark feature of social anxiety. As such, individuals with high FNE may show biased information processing when faced with social evaluation. The aim of the current study was to examine the neural underpinnings of anticipating and processing social-evaluative feedback, and its correlates with FNE. We used a social judgment paradigm in which female participants (N = 31) were asked to indicate whether they believed to be socially accepted or rejected by their peers. Anticipatory attention was indexed by the stimulus preceding negativity (SPN), while the feedback-related negativity and P3 were used to index the processing of social-evaluative feedback. Results provided evidence of an optimism bias in social peer evaluation, as participants more often predicted to be socially accepted than rejected. Participants with high levels of FNE needed more time to provide their judgments about the social-evaluative outcome. While anticipating social-evaluative feedback, SPN amplitudes were larger for anticipated social acceptance than for social rejection feedback. Interestingly, the SPN during anticipated social acceptance was larger in participants with high levels of FNE. None of the feedback-related brain potentials correlated with the FNE. Together, the results provided evidence of biased information processing in individuals with high levels of FNE when anticipating (rather than processing) social-evaluative feedback. The delayed response times in high FNE individuals were interpreted to reflect augmented vigilance imposed by the upcoming social-evaluative threat. Possibly, the SPN constitutes a neural marker of this vigilance in females with higher FNE levels, particularly when anticipating social acceptance feedback. PMID:24478667

  1. Fear of negative evaluation modulates electrocortical and behavioral responses when anticipating social evaluative feedback.

    PubMed

    Van der Molen, Melle J W; Poppelaars, Eefje S; Van Hartingsveldt, Caroline T A; Harrewijn, Anita; Gunther Moor, Bregtje; Westenberg, P Michiel

    2013-01-01

    Cognitive models posit that the fear of negative evaluation (FNE) is a hallmark feature of social anxiety. As such, individuals with high FNE may show biased information processing when faced with social evaluation. The aim of the current study was to examine the neural underpinnings of anticipating and processing social-evaluative feedback, and its correlates with FNE. We used a social judgment paradigm in which female participants (N = 31) were asked to indicate whether they believed to be socially accepted or rejected by their peers. Anticipatory attention was indexed by the stimulus preceding negativity (SPN), while the feedback-related negativity and P3 were used to index the processing of social-evaluative feedback. Results provided evidence of an optimism bias in social peer evaluation, as participants more often predicted to be socially accepted than rejected. Participants with high levels of FNE needed more time to provide their judgments about the social-evaluative outcome. While anticipating social-evaluative feedback, SPN amplitudes were larger for anticipated social acceptance than for social rejection feedback. Interestingly, the SPN during anticipated social acceptance was larger in participants with high levels of FNE. None of the feedback-related brain potentials correlated with the FNE. Together, the results provided evidence of biased information processing in individuals with high levels of FNE when anticipating (rather than processing) social-evaluative feedback. The delayed response times in high FNE individuals were interpreted to reflect augmented vigilance imposed by the upcoming social-evaluative threat. Possibly, the SPN constitutes a neural marker of this vigilance in females with higher FNE levels, particularly when anticipating social acceptance feedback.

  2. The Spiral-Interactive Program Evaluation Model.

    ERIC Educational Resources Information Center

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  3. 48 CFR 215.305 - Proposal evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....305 Proposal evaluation. (a)(2) Past performance evaluation. When a past performance evaluation is... Business Concerns, the evaluation factors shall include the past performance of offerors in complying with requirements of that clause. When a past performance evaluation is required by FAR 15.304, and the solicitation...

  4. 48 CFR 315.305 - Proposal evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... following elements: (1) An explanation of the evaluation process and the role of evaluators throughout the... include, at a minimum, the following elements: (1) A list of recommended technical evaluation panel... that the technical evaluation will have in the award decision. (2) The technical evaluation process...

  5. Evaluating the Evaluators: Comparative Study of High School Newspaper Critique Services.

    ERIC Educational Resources Information Center

    Davis, Nancy

    High school publication staffs depend on national critique services as a major means of evaluation and recognition, but most have no measure of how one critique service compares to the others, because they can afford the entry fee for only one evaluation. Thus, a study was conducted to test the validity of three major national critique…

  6. Fuel Cell Transit Bus Coordination and Evaluation Plan California Fuel Cell Transit Evaluation Team, DRAFT

    DOT National Transportation Integrated Search

    2003-10-29

    The objective of the DOE/NREL evaluation program is to provide comprehensive, unbiased evaluation results of advanced technology vehicle development and operations, evaluation of hydrogen infrastructure development and operation, and descriptions of ...

  7. Examining Teacher Evaluation Validity and Leadership Decision Making within a Standards-Based Evaluation System

    ERIC Educational Resources Information Center

    Kimball, Steven M.; Milanowski, Anthony

    2009-01-01

    Purpose: The article reports on a study of school leader decision making that examined variation in the validity of teacher evaluation ratings in a school district that has implemented a standards-based teacher evaluation system. Research Methods: Applying mixed methods, the study used teacher evaluation ratings and value-added student achievement…

  8. Products for Improving Educational Evaluation. Center for the Study of Evaluation. Fifth Annual Report.

    ERIC Educational Resources Information Center

    Alkin, Marvin C.

    This publication provides background information on the functions and operations of the Center for the Study of Evaluation and reports on such center products as Insructional Objectives Exchange (IOX), CSE Elementary School Test Evaluations, and Evaluation Workshop I. Appendixes include: a summary of center accomplishments; a list of the center's…

  9. Evaluating programs that address ideological issues: ethical and practical considerations for practitioners and evaluators.

    PubMed

    Lieberman, Lisa D; Fagen, Michael C; Neiger, Brad L

    2014-03-01

    There are important practical and ethical considerations for organizations in conducting their own, or commissioning external, evaluations and for both practitioners and evaluators, when assessing programs built on strongly held ideological or philosophical approaches. Assessing whether programs "work" has strong political, financial, and/or moral implications, particularly when expending public dollars, and may challenge objectivity about a particular program or approach. Using a case study of the evaluation of a school-based abstinence-until-marriage program, this article discusses the challenges, lessons learned, and ethical responsibilities regarding decisions about evaluation, specifically associated with ideologically driven programs. Organizations should consider various stakeholders and views associated with their program to help identify potential pitfalls in evaluation. Once identified, the program or agency needs to carefully consider its answers to two key questions: Do they want the answer and are they willing to modify the program? Having decided to evaluate, the choice of evaluator is critical to assuring that ethical principles are maintained and potential skepticism or criticism of findings can be addressed appropriately. The relationship between program and evaluator, including agreements about ownership and eventual publication and/or promotion of data, should be addressed at the outset. Programs and organizations should consider, at the outset, their ethical responsibility when findings are not expected or desired. Ultimately, agencies, organizations, and programs have an ethical responsibility to use their data to provide health promotion programs, whether ideologically founded or not, that appropriately and effectively address the problems they seek to solve.

  10. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  11. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  12. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  13. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2013-07-01 2013-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  14. 38 CFR 21.57 - Extended evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Extended evaluation. 21... Initial and Extended Evaluation § 21.57 Extended evaluation. (a) Purpose. The purpose of an extended... of services. During the extended evaluation, a veteran may be provided: (1) Diagnostic and evaluative...

  15. Institutional Tendencies of Legitimate Evaluation: A Comparison of Finnish and English Higher Education Evaluations

    ERIC Educational Resources Information Center

    Vartiainen, Pirkko

    2005-01-01

    This article analyses institutional evaluations of higher education in England and Finland through the concept of legitimacy. The focus of the article is on the institutional tendencies of legitimacy. This author's hypothesis is that evaluation is legitimate when the evaluation process is of a good quality and accepted both morally and in practice…

  16. The Preparticipation Sports Evaluation.

    PubMed

    Mirabelli, Mark H; Devine, Mathew J; Singh, Jaskaran; Mendoza, Michael

    2015-09-01

    The preparticipation physical evaluation is a commonly requested medical visit for amateur and professional athletes of all ages. The overarching goal is to maximize the health of athletes and their safe participation in sports. Although studies have not found that the preparticipation physical evaluation prevents morbidity and mortality associated with sports, it may detect conditions that predispose the athlete to injury or illness and can provide strategies to prevent injuries. Clearance depends on the outcome of the evaluation and the type of sport (and sometimes position or event) in which the athlete participates. All persons undergoing a preparticipation physical evaluation should be questioned about exertional symptoms, presence of a heart murmur, symptoms of Marfan syndrome, and family history of premature serious cardiac conditions or sudden death. The physical examination should focus on the cardiovascular and musculoskeletal systems. U.S. medical and athletic organizations discourage screening electrocardiography and blood and urine testing in asymptomatic patients. Further evaluation should be considered for persons with heart or lung disease, bleeding disorders, musculoskeletal problems, history of concussion, or other neurologic disorders.

  17. Case study of evaluations that go beyond clinical outcomes to assess quality improvement diabetes programmes using the Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE)

    PubMed Central

    Harris, Stewart B.; Naqshbandi Hayward, Mariam; Tompkins, Jordan W.

    2016-01-01

    Abstract Rationale, aims and objectives Investments in efforts to reduce the burden of diabetes on patients and health care are critical; however, more evaluation is needed to provide evidence that informs and supports future policies and programmes. The newly developed Diabetes Evaluation Framework for Innovative National Evaluations (DEFINE) incorporates the theoretical concepts needed to facilitate the capture of critical information to guide investments, policy and programmatic decision making. The aim of the study is to assess the applicability and value of DEFINE in comprehensive real‐world evaluation. Method Using a critical and positivist approach, this intrinsic and collective case study retrospectively examines two naturalistic evaluations to demonstrate how DEFINE could be used when conducting real‐world comprehensive evaluations in health care settings. Results The variability between the cases and the evaluation designs are described and aligned to the DEFINE goals, steps and sub‐steps. The majority of the theoretical steps of DEFINE were exemplified in both cases, although limited for knowledge translation efforts. Application of DEFINE to evaluate diverse programmes that target various chronic diseases is needed to further test the inclusivity and built‐in flexibility of DEFINE and its role in encouraging more comprehensive knowledge translation. Conclusions This case study shows how DEFINE could be used to structure or guide comprehensive evaluations of programmes and initiatives implemented in health care settings and support scale‐up of successful innovations. Future use of the framework will continue to strengthen its value in guiding programme evaluation and informing health policy to reduce the burden of diabetes and other chronic diseases. PMID:26804339

  18. Using Predictive Evaluation to Design, Evaluate, and Improve Training for Polio Volunteers

    PubMed Central

    Traicoff, Denise A.; Basarab, Dave; Ehrhardt, Derek T.; Brown, Sandi; Celaya, Martin; Jarvis, Dennis; Howze, Elizabeth H.

    2018-01-01

    Background Predictive Evaluation (PE) uses a four-step process to predict results then designs and evaluates a training intervention accordingly. In 2012, the Sustainable Management Development Program (SMDP) at the Centers for Disease Control and Prevention used PE to train Stop Transmission of Polio (STOP) program volunteers. Methods Stakeholders defined specific beliefs and practices that volunteers should demonstrate. These predictions and adult learning practices were used to design a curriculum to train four cohorts. At the end of each workshop, volunteers completed a beliefs survey and wrote goals for intended actions. The goals were analyzed for acceptability based on four PE criteria. The percentage of acceptable goals and the beliefs survey results were used to define the quality of the workshop. A postassignment adoption evaluation was conducted for two cohorts, using an online survey and telephone or in-person structured interviews. The results were compared with the end of workshop findings. Results The percentage of acceptable goals across the four cohorts ranged from 49% to 85%. In the adoption evaluation of two cohorts, 88% and 94% of respondents reported achieving or making significant progress toward their goal. A comparison of beliefs survey responses across the four cohorts indicated consistencies in beliefs that aligned with stakeholders’ predictions. Conclusions Goal statements that participants write at the end of a workshop provide data to evaluate training quality. Beliefs surveys surface attitudes that could help or hinder workplace performance. The PE approach provides an innovative framework for health worker training and evaluation that emphasizes performance. PMID:29457126

  19. Using Evaluability Assessment to Improve Program Evaluation for the Blue-Throated Macaw Environmental Education Project in Bolivia

    ERIC Educational Resources Information Center

    Salvatierra da Silva, Daniela; Jacobson, Susan K.; Monroe, Martha C.; Israel, Glenn D.

    2016-01-01

    An evaluability assessment of a program to save a critically endangered bird helped prepare the Blue-throated Macaw Environmental Education Project for evaluation and program improvement. The evaluability assessment facilitated agreement among key stakeholders on evaluation criteria and intended uses of evaluation information in order to maximize…

  20. Evaluation Plan for ORBIS

    DOT National Transportation Integrated Search

    1974-03-01

    This report contains the evaluation plan and experimental design for determining the effectiveness and usability of ORBIS, a proprietary device for automatically detecting and recording speeding motorists. The experimental evaluation will be conducte...

  1. The Role of Evaluation and Plans for Evaluating the Current Testing Program.

    ERIC Educational Resources Information Center

    Winters, Lynn

    The Palos Verdes Peninsula Unified School District Office of Program Evaluation and Research is responsible for providing information for program development and improvement; providing test information to special programs coordinators; and acting as a clearinghouse for all information concerning tests, evaluation methodology, and educational…

  2. Advancing Evaluation of Character Building Programs

    ERIC Educational Resources Information Center

    Urban, Jennifer Brown; Trochim, William M.

    2017-01-01

    This article presents how character development practitioners, researchers, and funders might think about evaluation, how evaluation fits into their work, and what needs to happen in order to sustain evaluative practices. A broader view of evaluation is presented whereby evaluation is not just seen as something that is applied at a program level,…

  3. Nearshore Survey System Evaluation

    DTIC Science & Technology

    2017-12-01

    This report (ERDC/CHL TR-17-19, Coastal Field Data Collection Program, December 2017), by Michael F. Forte, William A. Birkemeier, and J. Robert Mitchell, evaluates the accuracy of two systems for surveying the beach and nearshore zone.

  4. Student Evaluations of Teaching Are an Inadequate Assessment Tool for Evaluating Faculty Performance

    ERIC Educational Resources Information Center

    Hornstein, Henry A.

    2017-01-01

    Literature is examined to support the contention that student evaluations of teaching (SET) should not be used for summative evaluation of university faculty. Recommendations for alternatives to SET are provided.

  5. Assessing Vital Signs: Applying Two Participatory Evaluation Frameworks to the Evaluation of a College of Nursing

    ERIC Educational Resources Information Center

    Connors, Susan C.; Magilvy, Joan K.

    2011-01-01

    Evaluation research has been in progress to clarify the concept of participatory evaluation and to assess its impact. Recently, two theoretical frameworks have been offered--Daigneault and Jacob's participatory evaluation measurement index and Champagne and Smits' model of practical participatory evaluation. In this case report, we apply these…

  6. Unintended outcomes evaluation approach: A plausible way to evaluate unintended outcomes of social development programmes.

    PubMed

    Jabeen, Sumera

    2018-06-01

    Social development programmes are deliberate attempts to bring about change, and unintended outcomes can be considered inherent to any such intervention. There is now a solid consensus among the international evaluation community regarding the need to consider unintended outcomes as a key aspect in any evaluative study. However, this concern often equates to nothing more than false piety. Existing evaluation theory suffers from overlapping terminology, inadequate categorisation of unintended outcomes, and a lack of guidance on how to study them. To advance the knowledge of evaluation theory, methods, and practice, the author has developed an evaluation approach to study unintended effects using a theory building, testing, and refinement process. A comprehensive classification of unintended outcomes on the basis of knowability, value, distribution, and temporality helped specify various types of unintended outcomes for programme evaluation. Corresponding to this classification, a three-step evaluation process was proposed: (a) outlining programme intentions, (b) forecasting likely unintended effects, and (c) mapping the anticipated and understanding the unanticipated unintended outcomes. This unintended outcomes evaluation approach (UOEA) was then trialled by undertaking a multi-site and multi-method case study of a poverty alleviation programme in Pakistan, and refinements were made to the approach. The case study revealed that the programme was producing a number of unintended effects, mostly negative, affecting those already disadvantaged, such as the poorest, women, and children. The trialling process demonstrated the effectiveness of the UOEA and suggests that it can serve as a useful guide for future evaluation practice. It also provides the discipline of evaluation with an empirically based reference point for further theoretical developments in the study of unintended outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Evaluating Social Programs at the State and Local Level. The JTPA Evaluation Design Project.

    ERIC Educational Resources Information Center

    Blalock, Ann Bonar, Ed.; And Others

    This book on evaluating social programs is an outcome of the Job Training Partnership Act (JTPA) Evaluation Design Project, which produced a set of 10 guides for the evaluation of state and local JTPA programs. This book distills ideas from these guides and applies them to a larger context. Part 1 presents a general approach to program evaluation…

  8. ITS evaluations and activities.

    DOT National Transportation Integrated Search

    2010-01-01

    Evaluations are critical in promoting innovative practices which may help to provide a safe and efficient transportation system. By conducting such evaluations, we develop quantifiable measures for those in policy development to appreciate and compre...

  9. Students' Evaluations about Climate Change

    ERIC Educational Resources Information Center

    Lombardi, Doug; Brandt, Carol B.; Bickel, Elliot S.; Burg, Colin

    2016-01-01

    Scientists regularly evaluate alternative explanations of phenomena and solutions to problems. Students should similarly engage in critical evaluation when learning about scientific and engineering topics. However, students do not often demonstrate sophisticated evaluation skills in the classroom. The purpose of the present study was to…

  10. Evaluating School Facilities in Brazil

    ERIC Educational Resources Information Center

    Ornstein, Sheila Walbe; Moreira, Nanci Saraiva

    2008-01-01

    Brazil's Sao Paulo Metropolitan Region is conducting a performance evaluation pilot study at three schools serving disadvantaged populations. The objective is first to test methods which can facilitate Post Occupancy Evaluations (POEs) and then to carry out the evaluations. The preliminary results are provided below.

  11. Prospects for Public Library Evaluation.

    ERIC Educational Resources Information Center

    Van House, Nancy A.; Childers, Thomas

    1991-01-01

    Discusses methods of evaluation that can be used to measure public library effectiveness, based on a conference sponsored by the Council on Library Resources. Topics discussed include the Public Library Effectiveness Study (PLES), quantitative and qualitative evaluation, using evaluative information for resource acquisition and resource…

  12. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  13. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  14. 42 CFR 431.424 - Evaluation requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... evaluations. Demonstration evaluations will include the following: (1) Quantitative research methods. (i... of appropriate evaluation strategies (including experimental and other quantitative and qualitative... demonstration. (ii) CMS will consider alternative evaluation designs when quantitative designs are technically...

  15. Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model

    ERIC Educational Resources Information Center

    Duman, Serap Nur; Akbas, Oktay

    2017-01-01

    This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…

  16. Evaluation of Prismo imprint.

    DOT National Transportation Integrated Search

    2009-05-01

    The purpose of this research project is to evaluate the constructability and performance of Prismo Imprint synthetic overlays (manufacturer later changed to Ennis Paint, Inc.). Imprint is being evaluated as an alternative to brick pavers. This produc...

  17. Statewide transit evaluation : Michigan

    DOT National Transportation Integrated Search

    1981-07-01

    The objective of this report is to share the experience gained during the : development of a performance evaluation methodology for public transportation in : the State of Michigan. The report documents the process through which an : evaluation metho...

  18. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  19. Structuring an Internal Evaluation Process.

    ERIC Educational Resources Information Center

    Gordon, Sheila C.; Heinemann, Harry N.

    1980-01-01

    The design of an internal program evaluation system requires (1) formulation of program, operational, and institutional objectives; (2) establishment of evaluation criteria; (3) choice of data collection and evaluation techniques; (4) analysis of results; and (5) integration of the system into the mainstream of operations. (SK)

  20. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  1. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  2. 28 CFR 90.58 - Evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Evaluation. 90.58 Section 90.58 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Indian Tribal Governments Discretionary Program § 90.58 Evaluation. The National Institute of Justice will conduct an evaluation of these programs. ...

  3. Evaluation Theory Tree Re-Examined

    ERIC Educational Resources Information Center

    Christie, Christina A.; Alkin, Marvin C.

    2008-01-01

    When examining various evaluation prescriptive theories comparatively, we find it helpful to have a framework showing how they are related that highlights features that distinguish theoretical perspectives, thus a "theory" about theories. The evaluation theory tree that we presented in Alkin's recent book, "Evaluation Roots"…

  4. Evaluating a Federated Medical Search Engine

    PubMed Central

    Belden, J.; Williams, J.; Richardson, B.; Schuster, K.

    2014-01-01

    Summary Background Federated medical search engines are health information systems that provide a single access point to different types of information. Their efficiency as clinical decision support tools has been demonstrated through numerous evaluations. Despite their rigor, very few of these studies report holistic evaluations of medical search engines and even fewer base their evaluations on existing evaluation frameworks. Objectives To evaluate a federated medical search engine, MedSocket, for its potential net benefits in an established clinical setting. Methods This study applied the Human, Organization, and Technology (HOT-fit) evaluation framework in order to evaluate MedSocket. The hierarchical structure of the HOT-factors allowed for identification of a combination of efficiency metrics. Human fit was evaluated through user satisfaction and patterns of system use; technology fit was evaluated through the measurements of time-on-task and the accuracy of the found answers; and organization fit was evaluated from the perspective of system fit to the existing organizational structure. Results Evaluations produced mixed results and suggested several opportunities for system improvement. On average, participants were satisfied with MedSocket searches and confident in the accuracy of retrieved answers. However, MedSocket did not meet participants’ expectations in terms of download speed, access to information, and relevance of the search results. These mixed results made it necessary to conclude that in the case of MedSocket, technology fit had a significant influence on the human and organization fit. Hence, improving technological capabilities of the system is critical before its net benefits can become noticeable. Conclusions The HOT-fit evaluation framework was instrumental in tailoring the methodology for conducting a comprehensive evaluation of the search engine. Such multidimensional evaluation of the search engine resulted in recommendations for

  5. Evaluation of masonry coatings.

    DOT National Transportation Integrated Search

    1969-08-01

    This report describes the evaluation of five coating systems to replace the conventional Class 2 rubbed finish now required on concrete structures. The evaluation consisted of preparing test specimens with each of the five coatings and conducting abs...

  6. Escalator Design Features Evaluation

    DOT National Transportation Integrated Search

    1982-05-01

    This study provides an evaluation of the effectiveness of several special design features associated with escalators in rail transit systems. The objective of the study was to evaluate the effectiveness of three escalator design features: (1) mat ope...

  7. Test techniques for evaluating flight displays

    NASA Technical Reports Server (NTRS)

    Haworth, Loran A.; Newman, Richard L.

    1993-01-01

    The rapid development of graphics technology allows for greater flexibility in aircraft displays, but display evaluation techniques have not kept pace. Historically, display evaluation has been based on subjective opinion and not on the actual aircraft/pilot performance. Existing electronic display specifications and evaluation techniques are reviewed. A display rating technique analogous to handling qualities ratings was developed and is recommended for future evaluations. The choice of evaluation pilots is also discussed and the use of a limited number of trained evaluators is recommended over the use of a large number of operational pilots.

  8. Evaluating Technology Transfer and Diffusion.

    ERIC Educational Resources Information Center

    Bozeman, Barry; And Others

    1988-01-01

    Four articles discuss the evaluation of technology transfer and diffusion: (1) "Technology Transfer at the U.S. National Laboratories: A Framework for Evaluation"; (2) "Application of Social Psychological and Evaluation Research: Lessons from Energy Information Programs"; (3) "Technology and Knowledge Transfer in Energy R and D Laboratories: An…

  9. 28 CFR 90.65 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Evaluation. 90.65 Section 90.65 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Arrest Policies in Domestic Violence Cases § 90.65 Evaluation. (a) The National Institute of Justice will conduct evaluations and studies of...

  10. Collaborative Relationships in Evaluation Consulting

    ERIC Educational Resources Information Center

    Maack, Stephen C.; Upton, Jan

    2006-01-01

    People are often driven to become "independent" as part of the desire to go out on their own. Independent evaluation consultants, however, frequently collaborate with others on evaluation projects. This chapter explores such collaborative relationships from both sides: those leading evaluations with subcontracted consultants and those who work as…

  11. Handbook in Evaluating with Photography.

    ERIC Educational Resources Information Center

    Templin, Patricia A.

    This handbook is intended to help educational evaluators use still photography in designing, conducting, and reporting evaluations of educational programs. It describes techniques for using a visual documentary approach to program evaluation that features data collected with a camera. The emphasis is on the aspects of educational evaluation…

  12. An Evaluation Framework for CALL

    ERIC Educational Resources Information Center

    McMurry, Benjamin L.; Williams, David Dwayne; Rich, Peter J.; Hartshorn, K. James

    2016-01-01

    Searching prestigious Computer-assisted Language Learning (CALL) journals for references to key publications and authors in the field of evaluation yields a short list. The "American Journal of Evaluation"--the flagship journal of the American Evaluation Association--is only cited once in both the "CALICO Journal and Language…

  13. 28 CFR 90.65 - Evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Evaluation. 90.65 Section 90.65 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) VIOLENCE AGAINST WOMEN Arrest Policies in Domestic Violence Cases § 90.65 Evaluation. (a) The National Institute of Justice will conduct evaluations and studies of...

  14. End User Evaluations

    NASA Astrophysics Data System (ADS)

    Jay, Caroline; Lunn, Darren; Michailidou, Eleni

    As new technologies emerge, and Web sites become increasingly sophisticated, ensuring they remain accessible to disabled and small-screen users is a major challenge. While guidelines and automated evaluation tools are useful for informing some aspects of Web site design, numerous studies have demonstrated that they provide no guarantee that the site is genuinely accessible. The only reliable way to evaluate the accessibility of a site is to study the intended users interacting with it. This chapter outlines the processes that can be used throughout the design life cycle to ensure Web accessibility, describing their strengths and weaknesses, and discussing the practical and ethical considerations that they entail. The chapter also considers an important emerging trend in user evaluations: combining data from studies of “standard” Web use with data describing existing accessibility issues, to drive accessibility solutions forward.

  15. The Use of Information Based Evaluation in Evaluating the Diagnostic Teaching Center.

    ERIC Educational Resources Information Center

    Poteet, James A.

    Information Based Evaluation (IBE) is identified as a design procedure for assessing a variety of projects, programs, and educational changes. IBE was used to evaluate a Comprehensive Diagnostic Teaching Center (DTC) which, in addition to providing teacher training and services to handicapped pupils, would bring together and focus all of the…

  16. The Evaluation Exchange: Emerging Strategies in Evaluating Child and Family Services, 2001.

    ERIC Educational Resources Information Center

    Goodyear, Leslie, Ed.; Bohan-Baker, Marielle, Ed.

    2001-01-01

    This document is comprised of the two 2001 issues of a newsletter of the Harvard Family Research Project, designed to share new ideas and experiences in evaluating systems reform and comprehensive child and family services. The first issue focuses on strategic communications and efforts of nonprofit agencies to evaluate strategic communication…

  17. Evaluating group purchasing organizations.

    PubMed

    Kaldor, Dennis C; Kowalski, Jamie C; Tankersley, Mark A

    2003-01-01

    A formal evaluation process can help healthcare organizations assess the current and/or potential value of a group purchasing organization (GPO). Healthcare organizations should approach a GPO evaluation as if they were entering into a new relationship. The evaluation should include purchasing and financial services, value-added services, and corporate relations/business practices. Healthcare organizations should consider the potential economies of scale and other services offered by a GPO. Healthcare organizations should consider using acceptable substitutes for products currently used or seeking products through alternative sources if doing so achieves greater value.

  18. Clinical Evaluation of Tinnitus.

    PubMed

    Hertzano, Ronna; Teplitzky, Taylor B; Eisenman, David J

    2016-05-01

    The clinical evaluation of patients with tinnitus differs based on whether the tinnitus is subjective or objective. Subjective tinnitus is usually associated with a hearing loss, and therefore, the clinical evaluation is focused on an otologic and audiologic evaluation with adjunct imaging/tests as necessary. Objective tinnitus is divided into perception of an abnormal somatosound or abnormal perception of a normal somatosound. The distinction between these categories is usually possible based on a history, physical examination, and audiogram, leading to directed imaging to identify the underlying abnormality. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Learning Self-Evaluation: Challenges for Students.

    ERIC Educational Resources Information Center

    MacGregor, Jean

    1993-01-01

    Self-evaluation is unfamiliar to most college students. Teachers can use varied approaches to support students in overcoming unfamiliarity with self-evaluation, lack of confidence in describing learning, writing difficulties, evaluation difficulties, discomfort discussing academic problems, cultural bias against self-evaluation, emotional…

  20. Handbook for Improving Superintendent Performance Evaluation.

    ERIC Educational Resources Information Center

    Candoli, Carl; And Others

    This handbook for superintendent performance evaluation contains information for boards of education as they institute or improve their evaluation system. The handbook answers questions involved in operationalizing, implementing, and evaluating a superintendent-evaluation system. The information was developed from research on superintendent…

  1. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  2. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  3. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  4. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  5. 48 CFR 436.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  6. Methodology of ETV Formative Evaluation.

    ERIC Educational Resources Information Center

    Duby, Aliza

    Formative evaluation is discussed as it relates to educational television (ETV). Formative evaluation, used to improve a project during its development, is especially valuable in the production of television programs. Methods useful in the formative evaluation of ETV are: (1) try-out studies; (2) learner verification and revision; (3)…

  7. Individualized Learning Course Evaluation Guidelines.

    ERIC Educational Resources Information Center

    Bauer, Barbara T.; Everett, Robert L.

    These guidelines provide standards for evaluators to estimate the quality of courses being considered for use in the Individualized Learning Center at Bell Telephone Laboratories. There are three parts. Part I guides the course evaluator through the evaluation of course materials, including course design and structure implementation. Part II is a…

  8. The Evaluation of Experiential Learning: Guidelines for Evaluators Concerning Graduate Student Evaluation. Revised.

    ERIC Educational Resources Information Center

    Central Michigan Univ., Mount Pleasant. Inst. for Personal and Career Development.

    Central Michigan University's experiential learning program and the guidelines for awarding academic credit are described. Graduate students may apply for college credit on the basis of relevant skills which have been acquired as a result of their occupational, military, and personal experiences. The evaluators judge the student's application to…

  9. Rural Principals and the North Carolina Teacher Evaluation Process: How Has the Transition from the TPAI-R to the New Evaluation Process Changed Principals' Evaluative Practices?

    ERIC Educational Resources Information Center

    Fuller, Charles Avery

    2016-01-01

    Beginning with the 2010-2011 school year the North Carolina State Board of Education (SBE) mandated the use of the North Carolina Teacher Evaluation Process (Evaluation Process) for use in all public school systems in the state to conduct teacher observations and evaluations. The Evaluation Process replaced the Teacher Performance Appraisal…

  10. 48 CFR 25.504 - Evaluation Examples.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Evaluation Examples. 25... PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504 Evaluation Examples. The following examples illustrate the application of the evaluation procedures in 25.502 and 25.503. The...

  11. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 4 2014-01-01 2014-01-01 false Management evaluations. 210.29 Section 210.29... AGRICULTURE CHILD NUTRITION PROGRAMS NATIONAL SCHOOL LUNCH PROGRAM Additional Provisions § 210.29 Management evaluations. (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State...

  12. The Evaluation of Teaching and Learning.

    ERIC Educational Resources Information Center

    Potocki-Malicet, Danielle; Holmesland, Icara; Estrela, Maria-Teresa; Veiga-Simao, Ana-Margarida

    1999-01-01

    Three different approaches to the evaluation of higher education in European and Scandinavian countries are examined: evaluation of a single discipline across institutions (Finland, Norway, Portugal, Spain, United Kingdom, northern Germany); evaluation of several disciplines within certain institutions (France, Germany); and internal evaluation of…

  13. Social Studies. MicroSIFT Courseware Evaluations.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    This compilation of 11 courseware evaluations gives a general overview of available social studies microcomputer courseware for students in grades 3-12. Each evaluation lists title, date, producer, date of evaluation, evaluating institution, cost, ability level, topic, medium of transfer, required hardware, required software, instructional…

  14. 48 CFR 225.504 - Evaluation examples.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Evaluation examples. 225..., DEPARTMENT OF DEFENSE SOCIOECONOMIC PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 225.504 Evaluation examples. For examples that illustrate the evaluation procedures in 225.502(c)(ii...

  15. 7 CFR 210.29 - Management evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Management evaluations. 210.29 Section 210.29... AGRICULTURE CHILD NUTRITION PROGRAMS NATIONAL SCHOOL LUNCH PROGRAM Additional Provisions § 210.29 Management evaluations. (a) Management evaluations. FNS will conduct a comprehensive management evaluation of each State...

  16. Fear of negative evaluation biases social evaluation inference: evidence from a probabilistic learning task.

    PubMed

    Button, Katherine S; Kounali, Daphne; Stapinski, Lexine; Rapee, Ronald M; Lewis, Glyn; Munafò, Marcus R

    2015-01-01

    Fear of negative evaluation (FNE) defines social anxiety yet the process of inferring social evaluation, and its potential role in maintaining social anxiety, is poorly understood. We developed an instrumental learning task to model social evaluation learning, predicting that FNE would specifically bias learning about the self but not others. During six test blocks (3 self-referential, 3 other-referential), participants (n = 100) met six personas and selected a word from a positive/negative pair to finish their social evaluation sentences "I think [you are / George is]…". Feedback contingencies corresponded to 3 rules, liked, neutral and disliked, with P[positive word correct] = 0.8, 0.5 and 0.2, respectively. As FNE increased participants selected fewer positive words (β = -0.4, 95% CI -0.7, -0.2, p = 0.001), which was strongest in the self-referential condition (FNE × condition 0.28, 95% CI 0.01, 0.54, p = 0.04), and the neutral and dislike rules (FNE × condition × rule, p = 0.07). At low FNE the proportion of positive words selected for self-neutral and self-disliked greatly exceeded the feedback contingency, indicating poor learning, which improved as FNE increased. FNE is associated with differences in processing social-evaluative information specifically about the self. At low FNE this manifests as insensitivity to learning negative self-referential evaluation. High FNE individuals are equally sensitive to learning positive or negative evaluation, which although objectively more accurate, may have detrimental effects on mental health.
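    The feedback contingencies this abstract reports (P[positive word correct] = 0.8, 0.5, and 0.2 for the liked, neutral, and disliked rules) can be sketched in a few lines. This is a minimal illustration, not the authors' task code; the win-stay/lose-shift learner, trial count, and seed are illustrative assumptions:

    ```python
    import random

    # Feedback contingencies from the abstract: for each rule, the
    # probability that the POSITIVE word is marked "correct".
    RULES = {"liked": 0.8, "neutral": 0.5, "disliked": 0.2}

    def run_block(rule, n_trials=32, seed=0):
        """Simulate one block with a simple win-stay/lose-shift learner
        (an illustrative assumption) and return the proportion of trials
        on which the positive word was chosen."""
        rng = random.Random(seed)
        chose_positive = True  # arbitrary first choice
        positive_choices = 0
        for _ in range(n_trials):
            positive_choices += chose_positive
            positive_correct = rng.random() < RULES[rule]
            if chose_positive != positive_correct:  # negative feedback: shift
                chose_positive = not chose_positive
        return positive_choices / n_trials

    for rule in RULES:
        print(rule, round(run_block(rule), 2))
    ```

    Over many trials, such a learner's proportion of positive choices tracks the contingency itself (about 0.8, 0.5, and 0.2), which gives a concrete reading of the abstract's finding that low-FNE participants' positive-word choices "greatly exceeded the feedback contingency" under the self-neutral and self-disliked rules.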

  17. Cyber-Evaluation: Evaluating a Distance Learning Program.

    ERIC Educational Resources Information Center

    Henderson, Denise L.

    This paper examines how the process of soliciting evaluation feedback from nonresident students in the Army Management Staff College (Virginia) program on leadership and management for civilian employees of the Army has evolved since 1995. Course design is briefly described, including the use of video-teleconferences, chat rooms, an electronic…

  18. Formative Evaluation of ETV Programmes.

    ERIC Educational Resources Information Center

    Duby, Aliza

    A process of formative evaluation which considers moral, scientific, and social values as its criteria and which is conducted by project staff is proposed for the evaluation of educational television (ETV) programs produced by the South African Broadcasting Corporation. The theoretical framework of ETV evaluation is outlined in the first section,…

  19. Evaluation of Two ESP Textbooks

    ERIC Educational Resources Information Center

    Al Fraidan, Abdullah

    2012-01-01

    This paper evaluated two ESP textbooks using the evaluation of McDonough and Shaw (2003) based on external and internal evaluation. The first textbook is "Business Objectives" (1996) by Vicki Hollett, and the second textbook is "Business Studies, Second Edition" (2002) by Alain Anderton. To avoid repetition, I will use BO and…

  20. Measuring Success: Evaluating Educational Programs

    ERIC Educational Resources Information Center

    Fisher, Yael

    2010-01-01

    This paper presents a new evaluation model that enables educational program and project managers to evaluate their programs with a simple, easy-to-understand approach. The "index of success model" is comprised of five parameters that make it possible to focus on and evaluate both the implementation and results of an educational program. The…

  1. Photos vs silhouettes for evaluation of profile esthetics between white and black evaluators.

    PubMed

    Pithon, Matheus Melo; Silva, Iane Souza Nery; Almeida, Indira Oliveira; Nery, Marine Soares; de Souza, Michele Luz; Barbosa, George; Dos Santos, Alex Ferreira; da Silva Coqueiro, Raildo

    2014-03-01

    To determine whether photos or silhouettes are adequate methods for evaluating the esthetic profiles of black subjects and whether black and white evaluators have different preferences for esthetic profiles. One photographic record of the profile of a black female patient with accentuated bimaxillary dentoalveolar protrusion was randomly selected. The image of the patient's profile was altered to produce a series of seven photos and seven silhouettes (a total of 14 images) with different lip positions but uniform distances in relation to the esthetic plane created by Ricketts (line E). Fifty black and 50 white lay evaluators were invited to rank the photos and silhouettes, produced according to lip position, in the order they considered most esthetically pleasing. The number of preferences found to be within the esthetic norm was slightly higher among the photographs than among the silhouettes; the profile with a deviation of -2 mm from line E was rated the most attractive, and the profile with a deviation of +6 mm from line E was considered the least attractive. There were no statistically significant differences in preferences related to the variables race, sex, and educational background. The esthetic attractiveness of the facial profiles of black subjects in photos and silhouettes was evaluated similarly by black and white evaluators. Among both black and white evaluators, the greatest preference was for the slightly concave profile, which was within the limit considered standard.

  2. Hardware Testing and System Evaluation: Procedures to Evaluate Commodity Hardware for Production Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goebel, J

    2004-02-27

    Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We expanded the basic ideas to other evaluations such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges in quality, so a systematic method and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.

  3. Strengthening Evaluation for Development

    ERIC Educational Resources Information Center

    Ofir, Zenda

    2013-01-01

    Although some argue that distinctions between "evaluation" and "development evaluation" are increasingly superfluous, it is important to recognize that some distinctions still matter. The severe vulnerabilities and power asymmetries inherent in most developing country systems and societies make the task of evaluation…

  4. Guidelines for Evaluating a Superintendent (To Assist School Board Members in Planning and in Evaluation).

    ERIC Educational Resources Information Center

    California School Boards Association, Sacramento.

    This publication is intended to aid local school board members in establishing procedures and priorities for evaluating the performance of their district superintendent. Except for a brief introductory section, the entire publication consists of a model comprehensive evaluation instrument. The evaluation model is organized in two main sections,…

  5. Reflections on Empowerment Evaluation: Learning from Experience.

    ERIC Educational Resources Information Center

    Fetterman, David M.

    1999-01-01

    Reflects on empowerment evaluation, the use of evaluation to foster improvement and self-determination. Empowerment evaluation uses quantitative and qualitative methods, and usually focuses on program evaluation. Discusses the growth in empowerment evaluation as a result of interest in participatory evaluation. (SLD)

  6. Evaluating Tenured Teachers: A Practical Approach.

    ERIC Educational Resources Information Center

    DePasquale, Daniel, Jr.

    1990-01-01

    Teachers with higher order needs benefit from expressing their creativity and exercising valued skills. The evaluation process should encourage experienced teachers to grow professionally and move toward self-actualization. The suggested evaluation model includes an evaluation conference, a choice of evaluation method, a planning conference, an…

  7. 48 CFR 3015.606-2 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Evaluation. 3015.606-2... Unsolicited Proposals 3015.606-2 Evaluation. (a) Comprehensive evaluations should be completed within sixty... contact point shall advise the offeror accordingly and provide a new evaluation completion date. The...

  8. On the Evaluation of Curriculum Reforms

    ERIC Educational Resources Information Center

    Hopmann, Stefan Thomas

    2003-01-01

    The paper considers the current international trend towards standards-based evaluation in a historical and comparative perspective. Based on a systematization of evaluation perspectives and tools, two basic patterns of curriculum control are discussed: process evaluation, and product evaluation. Whereas the first type has dominated the Continental…

  9. 10 CFR 712.15 - Management evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Management evaluation. 712.15 Section 712.15 Energy... Program Procedures § 712.15 Management evaluation. (a) Evaluation components. An evaluation by the HRP management official is required before an individual can be considered for initial certification or...

  10. 34 CFR 300.304 - Evaluation procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Placements Evaluations and Reevaluations § 300.304 Evaluation procedures. (a) Notice. The public agency must... evaluation, the public agency must— (1) Use a variety of assessment tools and strategies to gather relevant... procedures. Each public agency must ensure that— (1) Assessments and other evaluation materials used to...

  11. Process evaluation distributed system

    NASA Technical Reports Server (NTRS)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server and provides observation criteria information to it. The process evaluation module is in communication with the database server; it obtains the observation criteria information from the server and collects process data based on that information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module, also in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides for editing and modification of the collected data. The connectivity established by the database server to the administration, process evaluation, and data display modules minimizes the requirement for manual input of the collected process data.
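
    The connectivity described above (administration module pushes criteria, process evaluation module pulls criteria and records data, data display module reads the store) can be sketched in a few lines. All class and method names here are illustrative assumptions, not the patented implementation, and the in-memory "database server" stands in for a real database.

```python
class DatabaseServer:
    """Central store that connects the three modules."""
    def __init__(self):
        self.criteria = None      # observation criteria information
        self.process_data = []    # collected process data

class AdministrationModule:
    """Provides observation criteria information to the database server."""
    def set_criteria(self, db, criteria):
        db.criteria = criteria

class ProcessEvaluationModule:
    """Obtains criteria from the server and collects process data
    (in the patent, via a PDA)."""
    def collect(self, db, observation):
        assert db.criteria is not None, "criteria must be set first"
        if observation["metric"] in db.criteria:
            db.process_data.append(observation)

class DataDisplayModule:
    """Views collected process data (in the patent, via a website)."""
    def view(self, db):
        return list(db.process_data)

db = DatabaseServer()
AdministrationModule().set_criteria(db, {"cycle_time"})
ProcessEvaluationModule().collect(db, {"metric": "cycle_time", "value": 4.2})
collected = DataDisplayModule().view(db)
```

    Because every module reads from and writes to the shared server, collected observations reach the display module without any manual re-entry, which is the property the abstract emphasizes.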

  12. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing in a relational database everything about a test that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude, and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.
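
    Frequency-by-frequency testing of this kind is commonly reduced to estimating a sensor's amplitude response at each drive frequency. The sketch below uses a least-squares sine/cosine fit on synthetic reference and device-under-test channels; the signal parameters and the fitting approach are illustrative assumptions, not TALENT's actual processing.

```python
import numpy as np

fs = 1000.0                       # sample rate in Hz (assumed)
f = 4.0                           # drive frequency in Hz
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of samples

# Synthetic channels: reference and a scaled, phase-shifted device under test
ref = 1.0 * np.sin(2 * np.pi * f * t)
dut = 0.8 * np.sin(2 * np.pi * f * t + 0.1)

def amplitude(x, f, t):
    """Least-squares fit of a sine/cosine pair at frequency f;
    the two fitted quadrature coefficients give the amplitude."""
    basis = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

# Amplitude response of the device relative to the reference channel
response = amplitude(dut, f, t) / amplitude(ref, f, t)
```

    Repeating this at each discrete test frequency yields the measured response curve that can then be compared against a manufacturer data sheet.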

  13. Evaluating the Impact of Training: A Collection of Federal Agency Evaluation Practices.

    ERIC Educational Resources Information Center

    Salinger, Ruth; Bartlett, Joan

    The purpose of this document is to share various approaches used by federal agencies to assess needs and measure training effectiveness. The emphasis in the descriptions is on the evaluation process rather than on the results. One program was evaluated by employing return-on-investment (ROI) data and using volunteer line personnel who conducted…

  14. Online Course Evaluations Response Rates

    ERIC Educational Resources Information Center

    Guder, Faruk; Malliaris, Mary

    2013-01-01

    This paper studies the reasons for low response rates in online evaluations. Survey data are collected from the students to understand factors that might affect student participation in the course evaluation process. When course evaluations were opened to the student body, an email announcement was sent to all students, and a reminder email was…

  15. Dissemination: Handmaiden to Evaluation Use

    ERIC Educational Resources Information Center

    Lawrenz, Frances; Gullickson, Arlen; Toal, Stacie

    2007-01-01

    Use of evaluation findings is a valued outcome for most evaluators. However, to optimize use, the findings need to be disseminated to potential users in formats that facilitate use of the information. This reflective case narrative uses a national evaluation of a multisite National Science Foundation (NSF) program as the setting for describing the…

  16. Peer Evaluation: A Case Study.

    ERIC Educational Resources Information Center

    Weaver, Richard L., II

    Acknowledging the value of peer evaluation in the classroom, this paper describes the peer evaluation system used in a basic speech communication course at an Ohio university. The first section of the paper defines peer evaluation and describes the situation at the university to provide some understanding of the context in which the system was…

  17. Formative Evaluations in Online Classes

    ERIC Educational Resources Information Center

    Peterson, Jennifer L.

    2016-01-01

    Online courses are continuing to become an important component of higher education course offerings. As the number of such courses increases, the need for quality course evaluations and course improvements is also increasing. However, there is not general agreement on the best ways to evaluate and use evaluation data to improve online courses.…

  18. Evaluation of English Websites on Dental Caries by Using Consumer Evaluation Tools.

    PubMed

    Blizniuk, Anastasiya; Furukawa, Sayaka; Ueno, Masayuki; Kawaguchi, Yoko

    2016-01-01

    To evaluate the quality of patient-oriented online information about dental caries using existing consumer evaluation tools and to judge the efficacy of these tools in quality assessment. The websites for the evaluation were pooled by using two general search engines (Google and Yahoo!). The search terms were: 'dental caries', 'tooth decay' and 'tooth cavity'. Three assessment tools (LIDA, DISCERN and FRES) were used to evaluate the quality of the information in the areas of accessibility, usability, reliability and readability. In total, 77 websites were analysed. The median LIDA accessibility and usability scores were 45.0 and 8.0, respectively, corresponding to a medium level of quality. The median reliability scores for LIDA (12.0) and DISCERN (20.0) both corresponded to a low level of quality. Readability was high, with a median FRES score of 59.7. The websites on caries had good accessibility, usability and readability, while the reliability of the information was poor. The LIDA instrument was found to be more convenient than DISCERN and can be recommended to lay people for quick quality assessment.
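
    Of the three instruments, LIDA and DISCERN are questionnaire-based, while FRES (the Flesch Reading Ease Score) is computed directly from the text as 206.835 - 1.015 × (words per sentence) - 84.6 × (syllables per word). A minimal sketch follows; the vowel-group syllable counter is a crude assumption (published calculators use more careful heuristics), so scores will differ slightly from standard tools.

```python
import re

def count_syllables(word):
    """Crude heuristic: one syllable per group of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fres(text):
    """Flesch Reading Ease Score:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier text."""
    sentences = len(re.findall(r"[.!?]+", text)) or 1
    words = re.findall(r"[A-Za-z']+", text)
    total_syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))

score = fres("The cat sat on the mat.")
```

    A one-syllable-per-word sentence like the example scores near the top of the scale; the study's median of 59.7 sits in the mid-range, consistent with text readable by a general audience.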

  19. Evaluation of Cariogenic Bacteria

    PubMed Central

    Nishikawara, Fusao; Nomura, Yoshiaki; Imai, Susumu; Senda, Akira; Hanada, Nobuhiro

    2007-01-01

    Objectives The evaluation of mutans streptococci (MS) is one of the indices of caries risk. Dentocult™ and CRT™ are commercial kits for conveniently detecting and evaluating MS. However, the evaluation of MS has also been carried out simply by following an instruction manual, which is not easy to use for evaluating MS. The aim of this study was to examine the utility of modified Mitis-Salivarius Bacitracin (MSB) agar medium compared with MSB agar medium and the commercial kits, and to establish a convenient kit (mMSB-kit) using modified MSB agar. Methods MS in stimulated saliva from 27 subjects were detected with MSB agar medium, modified MSB agar medium, and the commercial kits. Laboratory and clinically isolated strains of MS were similarly evaluated. The ratios of MS among detected bacteria were compared by ELISA. Results The scores obtained with the mMSB-kit, based on modified MSB agar medium, were tabulated. Saliva samples showed different levels of MS between the culture methods and the commercial kits. Some samples that were full of MS were not detected by the commercial kits. Detection of MS by modified MSB agar medium and the mMSB-kit was significantly higher than with MSB agar medium and CRT™ (P < .01) and Dentocult SM™ (P < .05). Conclusion The sensitivity for detection of MS is higher for modified MSB agar medium than for MSB agar medium. The mMSB-kit is simple to use and can be an important contributor to the evaluation of MS as a caries risk factor. PMID:19212495

  20. Evaluation in the Context of the Government Market Place: Implications for the Evaluation of Research

    ERIC Educational Resources Information Center

    Della-Piana, Connie Kubo; Della-Piana, Gabriel M.

    2007-01-01

    While the current debate in the evaluation community has concentrated on examining and explicating implications of the choice of methods for evaluating federal programs, the authors of this paper address the challenges faced by the government in the selection of funding mechanisms for supporting program evaluation efforts. The choice of funding…

  1. The Impact of Evaluation: Lessons Drawn from the Evaluations of Five Early Childhood Education Programs.

    ERIC Educational Resources Information Center

    Granville, Arthur C.; And Others

    Five different program evaluations were described to indicate those qualities which make an evaluation effective or not effective. Evaluation effectiveness was defined as impact on decision making or long-term policy formation, and influence upon a variety of audiences. Robert D. Matz described the First Chance Project, and concluded that the…

  2. 22 CFR 1701.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Self-evaluation. 1701.110 Section 1701.110...-evaluation. (a) The agency shall, by November 28, 1994, evaluate its current policies and practices, and the... handicaps or organizations representing individuals with handicaps, to participate in the self-evaluation...

  3. 25 CFR 720.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Self-evaluation. 720.110 Section 720.110 Indians THE...-evaluation. (a) The agency shall, by August 24, 1987, evaluate its current policies and practices, and the... or organizations representing handicapped persons, to participate in the self-evaluation process by...

  4. Broadening the Discussion about Evaluator Advocacy

    ERIC Educational Resources Information Center

    Hendricks, Michael

    2014-01-01

    This issue of "American Journal of Evaluation" presents commentaries by evaluation leaders, George Grob and Rakesh Mohan, which draw upon their wealth of practical experience to address questions about evaluator advocacy, including What is meant by the word "advocacy"? Should evaluators ever advocate? If so, when and how? What…

  5. Alternative Aviation Jet Fuel Sustainability Evaluation Report Task 1 : Report Evaluating Existing Sustainability Evaluation Programs

    DOT National Transportation Integrated Search

    2011-10-25

    This report describes how existing biofuel sustainability evaluation programs meet requirements that are under consideration or are in early phases of adoption and implementation in various US and international contexts. Biofuel sustainability evalua...

  6. Evaluation in Geographic Education.

    ERIC Educational Resources Information Center

    Kurfman, Dana G., Ed.

    This second yearbook of the National Council for Geographic Education presents recent thinking about the formulation and assessment of the educational outcomes of geography. Dana G. Kurfman overviews "Evaluation Developments Useful in Geographic Education" relating evaluation to decision making, objectives, data gatherings, and data…

  7. 18 CFR 1313.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Self-evaluation. 1313... VALLEY AUTHORITY § 1313.110 Self-evaluation. (a) The agency shall, by August 24, 1987, evaluate its... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  8. 49 CFR 807.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Self-evaluation. 807.110 Section 807.110... TRANSPORTATION SAFETY BOARD § 807.110 Self-evaluation. (a) The agency shall, by April 9, 1987, evaluate its... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  9. Evaluation Methods of The Text Entities

    ERIC Educational Resources Information Center

    Popa, Marius

    2006-01-01

    The paper highlights some evaluation methods to assess the quality characteristics of the text entities. The main concepts used in building and evaluation processes of the text entities are presented. Also, some aggregated metrics for orthogonality measurements are presented. The evaluation process for automatic evaluation of the text entities is…

  10. The Role for an Evaluator: A Fundamental Issue for Evaluation of Education and Social Programs

    ERIC Educational Resources Information Center

    Luo, Heng

    2010-01-01

    This paper discusses one of the fundamental issues in education and social program evaluation: the proper role for an evaluator. Based on respective and comparative analysis of five theorists' positions on this fundamental issue, this paper reveals how different perspectives on other fundamental issues in evaluation such as value, methods, use and…

  11. How Do You Evaluate Everyone Who Isn't a Teacher? An Adaptable Evaluation Model for Professional Support Personnel.

    ERIC Educational Resources Information Center

    Stronge, James H.; And Others

    The evaluation of professional support personnel in the schools has been a neglected area in educational evaluation. The Center for Research on Educational Accountability and Teacher Evaluation (CREATE) has worked to develop a conceptually sound evaluation model and then to translate the model into practical evaluation procedures that facilitate…

  12. Measures of Success for Earth System Science Education: The DLESE Evaluation Services and the Evaluation Toolkit Collection

    NASA Astrophysics Data System (ADS)

    McCaffrey, M. S.; Buhr, S. M.; Lynds, S.

    2005-12-01

    Increased agency emphasis upon the integration of research and education, coupled with the ability to provide students with access to digital background materials, learning activities, and primary data sources, has begun to revolutionize Earth science education in formal and informal settings. The DLESE Evaluation Services team and the related Evaluation Toolkit collection (http://www.dlese.org/cms/evalservices/) provide services and tools for education project leads and educators. Through the Evaluation Toolkit, educators may access high-quality digital materials to assess students' cognitive gains, examples of alternative assessments, and case studies and exemplars of authentic research. The DLESE Evaluation Services team provides support for those who are developing evaluation plans on an as-requested basis. In addition, the Toolkit provides authoritative peer-reviewed articles about evaluation research techniques and strategies of particular importance to geoscience education. This paper will provide an overview of the DLESE Evaluation Toolkit and discuss challenges and best practices for assessing student learning and evaluating Earth system sciences education in a digital world.

  13. The Evaluation of Instruction.

    ERIC Educational Resources Information Center

    Kanaga, Kim

    Some of the theoretical and methodological problems with current practices in evaluating instruction at the higher education level are reviewed. Controversy over the evaluation of instruction in higher education has resulted at least in part from inadequate instrumentation. The instruments for instructional rating now used include administrator…

  14. Advances in Collaborative Evaluation

    ERIC Educational Resources Information Center

    Rodriguez-Campos, Liliana

    2012-01-01

    Collaborative evaluation is an approach that offers many advantages, among them access to information, quality of information gathered, opportunities for creative problem-solving, and receptivity to findings. In the last decade, collaborative evaluation has grown in popularity along with similar participatory, empowerment, and…

  15. MVP and Faculty Evaluation

    ERIC Educational Resources Information Center

    Theall, Michael

    2017-01-01

    This chapter considers faculty evaluation and motivational and volitional issues. The focus is on the ways in which faculty evaluation influences not only faculty attitudes and beliefs but also willingness to engage in professional development and instructional improvement programs. Recommendations for effective practice that enhances motivation…

  16. Workforce Training Program Evaluations.

    ERIC Educational Resources Information Center

    Washington State Workforce Training and Education Coordinating Board, Olympia.

    Three evaluations analyzed program characteristics and participant results in work force education and coordination in Washington State. The Employment Security Department's evaluation of Job Training Partnership Act Titles II and III looked for an association between participant characteristics and the type of training they receive and between…

  17. Cognitive Relativism in Evaluation.

    ERIC Educational Resources Information Center

    Alexander, H. A.

    1986-01-01

    A group of defenses of qualitative evaluation methods is examined, based on a hard relativistic interpretation of the work of Thomas Kuhn. A promising defense of qualitative evaluation may be found in a soft-relativist interpretation of Kuhn's analysis of the nature of scientific discovery. (Author/LMO)

  18. Developing Evaluation Capacity through Process Use

    ERIC Educational Resources Information Center

    King, Jean A.

    2007-01-01

    This article discusses how to make process use an independent variable in evaluation practice: the purposeful means of building an organization's capacity to conduct and use evaluations in the long run. The goal of evaluation capacity building (ECB) is to strengthen and sustain effective program evaluation practices through a number of activities:…

  19. 42 CFR 491.11 - Program evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Program evaluation. 491.11 Section 491.11 Public... Certification; and FQHCs Conditions for Coverage § 491.11 Program evaluation. (a) The clinic or center carries out, or arranges for, an annual evaluation of its total program. (b) The evaluation includes review of...

  20. 20 CFR 365.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Self-evaluation. 365.110 Section 365.110... § 365.110 Self-evaluation. (a) The agency shall, by December 27, 1989, evaluate its current policies and... self-evaluation process by submitting comments (both oral and written). (c) The agency shall, until at...

  1. 22 CFR 711.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Self-evaluation. 711.110 Section 711.110 Foreign... CORPORATION § 711.110 Self-evaluation. (a) The agency shall, by September 6, 1989, evaluate its current... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  2. 29 CFR 100.510 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false Self-evaluation. 100.510 Section 100.510 Labor Regulations... § 100.510 Self-evaluation. (a) The agency shall, by September 6, 1989, evaluate its current policies and... participate in the self-evaluation process by submitting comments (both oral and written). (c) The agency...

  3. Theory-based evaluation of a comprehensive Latino education initiative: an interactive evaluation approach.

    PubMed

    Nesman, Teresa M; Batsche, Catherine; Hernandez, Mario

    2007-08-01

    Latino student access to higher education has received significant national attention in recent years. This article describes a theory-based evaluation approach used with ENLACE of Hillsborough, a 5-year project funded by the W.K. Kellogg Foundation for the purpose of increasing Latino student graduation from high school and college. Theory-based evaluation guided planning, implementation, and evaluation through the process of developing consensus on the Latino population of focus, adopting culturally appropriate principles and values to guide the project, and identifying strategies to reach, engage, and impact outcomes for Latino students and their families. The approach included interactive development of logic models that focused the scope of interventions and guided evaluation designs for addressing three stages of the initiative. Challenges and opportunities created by the approach are discussed, as well as ways in which the initiative impacted Latino students and collaborating educational institutions.

  4. Fear of Negative Evaluation Biases Social Evaluation Inference: Evidence from a Probabilistic Learning Task

    PubMed Central

    Button, Katherine S.; Kounali, Daphne; Stapinski, Lexine; Rapee, Ronald M.; Lewis, Glyn; Munafò, Marcus R.

    2015-01-01

    Background Fear of negative evaluation (FNE) defines social anxiety, yet the process of inferring social evaluation, and its potential role in maintaining social anxiety, is poorly understood. We developed an instrumental learning task to model social evaluation learning, predicting that FNE would specifically bias learning about the self but not others. Methods During six test blocks (3 self-referential, 3 other-referential), participants (n = 100) met six personas and selected a word from a positive/negative pair to finish their social evaluation sentences “I think [you are / George is]…”. Feedback contingencies corresponded to 3 rules, liked, neutral and disliked, with P[positive word correct] = 0.8, 0.5 and 0.2, respectively. Results As FNE increased, participants selected fewer positive words (β = −0.4, 95% CI −0.7, −0.2, p = 0.001); the effect was strongest in the self-referential condition (FNE × condition 0.28, 95% CI 0.01, 0.54, p = 0.04) and under the neutral and disliked rules (FNE × condition × rule, p = 0.07). At low FNE the proportion of positive words selected for self-neutral and self-disliked greatly exceeded the feedback contingency, indicating poor learning, which improved as FNE increased. Conclusions FNE is associated with differences in processing social-evaluative information specifically about the self. At low FNE this manifests as insensitivity to learning negative self-referential evaluation. High FNE individuals are equally sensitive to learning positive or negative evaluation, which, although objectively more accurate, may have detrimental effects on mental health. PMID:25853835

  5. The AME2016 atomic mass evaluation (I). Evaluation of input data; and adjustment procedures

    NASA Astrophysics Data System (ADS)

    Huang, W. J.; Audi, G.; Wang, Meng; Kondev, F. G.; Naimi, S.; Xu, Xing

    2017-03-01

    This paper is the first of two articles (Part I and Part II) that present the results of the new atomic mass evaluation, AME2016. It includes complete information on the experimental input data (including unused and rejected values), as well as details on the evaluation procedures used to derive the tables of recommended values given in the second part. This article describes the evaluation philosophy and procedures that were implemented in the selection of specific nuclear reaction, decay and mass-spectrometric results. These input values were entered in the least-squares adjustment for determining the best values for the atomic masses and their uncertainties. Details of the calculation and particularities of the AME are then described. All accepted and rejected data, including outweighed ones, are presented in a tabular format and compared with the adjusted values obtained using the least-squares fit analysis. Differences with the previous AME2012 evaluation are discussed, and specific information is presented for several cases that may be of interest to AME users. The second AME2016 article gives a table with the recommended values of atomic masses, as well as tables and graphs of derived quantities, along with the list of references used in both the AME2016 and the NUBASE2016 evaluations (the first paper in this issue). AMDC: http://amdc.impcas.ac.cn/
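    The least-squares adjustment described can be illustrated in miniature. The sketch below is generic weighted least squares on invented numbers (two unknown "masses" constrained by two direct measurements and one mass difference); it is not AME code, data, or the full covariance treatment:

```python
def weighted_lsq(design, values, sigmas):
    """Solve the normal equations (A^T W A) x = A^T W y for two unknowns,
    with W = diag(1 / sigma^2). Pure-Python 2x2 solve, for illustration only."""
    ata = [[0.0, 0.0], [0.0, 0.0]]
    aty = [0.0, 0.0]
    for row, y, s in zip(design, values, sigmas):
        w = 1.0 / s ** 2
        for i in range(2):
            aty[i] += w * row[i] * y
            for j in range(2):
                ata[i][j] += w * row[i] * row[j]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x0 = (aty[0] * ata[1][1] - ata[0][1] * aty[1]) / det
    x1 = (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det
    return x0, x1

# Invented input data: m1 measured directly, the difference m2 - m1, and m2
A = [[1, 0], [-1, 1], [0, 1]]      # each row maps the unknowns to one datum
y = [100.0, 50.0, 150.1]           # measured values (arbitrary units)
s = [0.1, 0.05, 0.2]               # one-sigma uncertainties
m1, m2 = weighted_lsq(A, y, s)
```

    The adjusted values land near 100.02 and 150.02: the precise mass-difference datum pulls the poorly measured m2 toward consistency with m1, which is the essential mechanism of the adjustment.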

  6. Integrated Assessment Model Evaluation

    NASA Astrophysics Data System (ADS)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two-decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context with few fixed, unchanging relationships, unlike that of physical-system models. Of course, strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside…

  7. Commentary: Can This Evaluation Be Saved?

    ERIC Educational Resources Information Center

    Ginsberg, Pauline E.

    2004-01-01

    Can this evaluation be saved? More precisely, can this evaluation be saved in such a way that both evaluator and client feel satisfied that their points of view were respected and both agree that the evaluation itself provides valid information obtained in a principled manner? Because the scenario describes a preliminary discussion and no contract…

  8. Creating Robust Evaluation of ATE Projects

    ERIC Educational Resources Information Center

    Eddy, Pamela L.

    2017-01-01

    Funded grant projects all involve some form of evaluation, and Advanced Technological Education (ATE) grants are no exception. Program evaluation serves as a critical component not only for determining whether a project has met its intended and desired outcomes; the evaluation process is also a central feature of the grant application itself.…

  9. Evaluation and its importance for nursing practice.

    PubMed

    Moule, Pam; Armoogum, Julie; Douglass, Emma; Taylor, Dr Julie

    2017-04-26

    Evaluation of service delivery is an important aspect of nursing practice. Service evaluation is being increasingly used and led by nurses, who are well placed to evaluate service and practice delivery. This article defines evaluation of services and wider care delivery and its relevance in NHS practice and policy. It aims to encourage nurses to think about how evaluation of services or practice differs from research and audit activity and to consider why and how they should use evaluation in their practice. A process for planning and conducting an evaluation and disseminating findings is presented. Evaluation in the healthcare context can be a complicated activity and some of the potential challenges of evaluation are described, alongside possible solutions. Further resources and guidance on evaluation activity to support nurses' ongoing development are identified.

  10. Measuring Evaluation Fears in Adolescence: Psychometric Validation of the Portuguese Versions of the Fear of Positive Evaluation Scale and the Specific Fear of Negative Evaluation Scale

    ERIC Educational Resources Information Center

    Vagos, Paula; Salvador, Maria do Céu; Rijo, Daniel; Santos, Isabel M.; Weeks, Justin W.; Heimberg, Richard G.

    2016-01-01

    Modified measures of Fear of Negative Evaluation and Fear of Positive Evaluation were examined among Portuguese adolescents. These measures demonstrated replicable factor structure, internal consistency, and positive relationships with social anxiety and avoidance. Gender differences were found. Implications for evaluation and intervention are…

  11. Choosing a Truly External Evaluator

    ERIC Educational Resources Information Center

    Ray, Marilyn

    2006-01-01

    This scenario discusses a situation in which a proposal has been published by a consortium of foundations for an "external" evaluator to evaluate a replication at two new sites of a program they have been funding for many years. A proposal is received from Dr. Porto-Novo, who has been the external evaluator of the initial program for about 10…

  12. Impact of Digital Tooth Preparation Evaluation Technology on Preclinical Dental Students' Technical and Self-Evaluation Skills.

    PubMed

    Gratton, David G; Kwon, So Ran; Blanchette, Derek; Aquilino, Steven A

    2016-01-01

    The aim of this study was to evaluate the effect of digital tooth preparation imaging and evaluation technology on dental students' technical abilities, self-evaluation skills, and the assessment of their simulated clinical work. A total of 80 second-year students at one U.S. dental school were assigned to one of three groups: control (n=40), E4D Compare (n=20), and Sirona prepCheck (n=20). Students in the control group were taught by traditional teaching methodologies, and the technology-assisted groups received both traditional training and supplementary feedback from the corresponding digital system. Three outcomes were measured: faculty technical score, self-evaluation score, and E4D Compare scores at 0.30 mm tolerance. Correlations were determined between the groups' scores from visual assessment and self-evaluation and between the visual assessment and digital scores. The results showed that the visual assessment and self-evaluation scores did not differ among groups (p>0.05). Overall, correlations between visual and digital assessment scores were modest though statistically significant (5% level of significance). These results suggest that the use of digital tooth preparation evaluation technology did not impact the students' prosthodontic technical and self-evaluation skills. Visual scores given by faculty and digital assessment scores correlated moderately in only two instances.

  13. Evaluation of ridesharing programs in Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulp, G.; Tsao, H.J.; Webber, R.E.

    1982-10-01

    The design, implementation, and results of a carpool and vanpool evaluation are described. Objectives of the evaluation were: to develop credible estimates of the energy savings attributable to the ridesharing program, to provide information for improving the performance of the ridesharing program, and to add to a general understanding of the ridesharing process. Previous evaluation work is critiqued and the research methodology adopted for this study is discussed. The ridesharing program in Michigan is described and the basis for selecting Michigan as the evaluation site is discussed. The evaluation methodology is presented, including research design, sampling procedure, data collection, and data validation. Evaluation results are analyzed. (LEW)

  14. Independent evaluation: insights from public accounting.

    PubMed

    Brown, Abigail B; Klerman, Jacob Alex

    2012-06-01

    Maintaining the independence of contract government program evaluation presents significant contracting challenges. The ideal outcome for an agency is often both the impression of an independent evaluation and a glowing report. In this, independent evaluation is like financial statement audits: firm management wants both a public accounting firm to attest to the fairness of its financial accounts and to be allowed to account for transactions as it sees fit. In both cases, the evaluation or audit is being conducted on behalf of outsiders--the public or shareholders--but is overseen by a party with significant interests at stake in the outcome--the agency being evaluated or executive management of the firm. We review the contracting strategies developed to maintain independence in auditing. We examine evidence on the effectiveness of professionalism, reputation, liability and owner oversight in constraining behavior in auditing. We then establish parallels with contracting for evaluations and apply these insights to changes that might maintain and improve evaluator independence. By analogy with the Sarbanes-Oxley Act of 2002 reforms in auditing, we recommend exploring using a reformulated Technical Working Group to encourage more prompt release of more evaluation results and to help insulate evaluators from inappropriate pressure to change their results or analysis approach.

  15. Empirically evaluating decision-analytic models.

    PubMed

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.
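    The consistency metric described, model uncertainty ranges overlapping study confidence intervals, is straightforward to operationalize. A minimal sketch using the 30-year figures quoted above (the function name is illustrative, not from the paper):

```python
def intervals_overlap(model_range, study_ci):
    """Return True when a model's uncertainty range and a study's
    confidence interval share at least one point, i.e. they overlap."""
    model_lo, model_hi = model_range
    study_lo, study_hi = study_ci
    return max(model_lo, study_lo) <= min(model_hi, study_hi)

# 30-year cumulative invasive cancer risk (%), model range vs. study CI
inadequately_treated = intervals_overlap((30.9, 49.7), (28.4, 48.3))
appropriately_treated = intervals_overlap((0.7, 1.3), (0.4, 3.3))
```

    Both comparisons overlap, matching the abstract's conclusion that the microsimulation is consistent with the long-term progression study on these outcomes.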

  16. An Evaluation of TCITY: The Twin City Institute for Talented Youth. Report #1 in Evaluation Report Series.

    ERIC Educational Resources Information Center

    Stake, Robert E.; Gjerde, Craig

    This evaluation of the Twin City Institute for Talented Youth, a summer program for gifted students in grades 9 through 12, consists of two parts: a description of the program; and the evaluators' assessments, including advocate and adversary reports. Achievement tests were not used for evaluation. Evaluative comments follow each segment of the…

  17. What and How Are We Evaluating? Meta-Evaluation Study of the NASA Innovations in Climate Education (NICE) Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Barnes, M. H.; Chambers, L. H.; Pippin, M. R.

    2011-12-01

    As part of NASA's Minority University Research and Education Program (MUREP), the NASA Innovations in Climate Education (NICE) project at Langley Research Center has funded 71 climate education initiatives since 2008. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. Each of the funded projects proposes and carries out its own evaluation plan, in collaboration with external or internal evaluation experts. Using this portfolio as an exemplar case, NICE has undertaken a systematic meta-evaluation of these plans, focused primarily on evaluation questions, approaches, and methods. This meta-evaluation study seeks to understand the range of evaluations represented in the NICE portfolio, including descriptive information (what evaluations, questions, designs, approaches, and methods are applied?) and questions of value (do these evaluations meet the needs of projects and their staff, and of NASA/NICE?). In the current climate, as federal funders of climate change and STEM education projects seek to better understand and incorporate evaluation into their decisions, evaluators and project leaders are also seeking to build robust understanding of program effectiveness. Meta-evaluations like this provide some baseline understanding of the current status quo and the kinds of evaluations carried out within such funding portfolios. These explorations are needed to understand the common ground between evaluative best practices, limited resources, and agencies' desires, capacity, and requirements. When NASA asks for evaluation of funded projects, what happens? Which questions are asked and answered, using which tools? To what extent do the evaluations meet the needs of projects and

  18. What and How Are We Evaluating? Meta-Evaluation Study of the NASA Innovations in Climate Education (NICE) Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Barnes, M. H.; Chambers, L. H.; Pippin, M. R.

    2013-12-01

    As part of NASA's Minority University Research and Education Program (MUREP), the NASA Innovations in Climate Education (NICE) project at Langley Research Center has funded 71 climate education initiatives since 2008. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. Each of the funded projects proposes and carries out its own evaluation plan, in collaboration with external or internal evaluation experts. Using this portfolio as an exemplar case, NICE has undertaken a systematic meta-evaluation of these plans, focused primarily on evaluation questions, approaches, and methods. This meta-evaluation study seeks to understand the range of evaluations represented in the NICE portfolio, including descriptive information (what evaluations, questions, designs, approaches, and methods are applied?) and questions of value (do these evaluations meet the needs of projects and their staff, and of NASA/NICE?). In the current climate, as federal funders of climate change and STEM education projects seek to better understand and incorporate evaluation into their decisions, evaluators and project leaders are also seeking to build robust understanding of program effectiveness. Meta-evaluations like this provide some baseline understanding of the current status quo and the kinds of evaluations carried out within such funding portfolios. These explorations are needed to understand the common ground between evaluative best practices, limited resources, and agencies' desires, capacity, and requirements. When NASA asks for evaluation of funded projects, what happens? Which questions are asked and answered, using which tools? To what extent do the evaluations meet the needs of projects and

  19. CEA SMAD 2016 Digitizer Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    Sandia National Laboratories has tested and evaluated an updated SMAD digitizer, developed by the French Alternative Energies and Atomic Energy Commission (CEA). The SMAD digitizers are intended to record sensor output for seismic and infrasound monitoring applications. The purpose of this digitizer evaluation is to measure the performance characteristics in such areas as power consumption, input impedance, sensitivity, full scale, self-noise, dynamic range, system noise, response, passband, and timing. The SMAD digitizers have been updated since their last evaluation by Sandia to improve their performance when recording at a sample rate of 20 Hz for infrasound applications and 100 Hz for hydro-acoustic seismic stations. This evaluation focuses primarily on the 20 Hz and 100 Hz sample rates. The SMAD digitizers are being evaluated for potential use in the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).
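    Several of the listed performance characteristics reduce to simple ratios. For instance, dynamic range is conventionally quoted in decibels as 20·log10 of full-scale signal over self-noise; the sketch below uses invented amplitudes, not SMAD measurements:

```python
import math

def dynamic_range_db(full_scale_rms, self_noise_rms):
    """Dynamic range in dB: 20 * log10(full-scale amplitude / self-noise).
    Both inputs are RMS amplitudes in the same units."""
    return 20.0 * math.log10(full_scale_rms / self_noise_rms)

# Hypothetical digitizer: 10 V full scale, 10 microvolt RMS self-noise
dr = dynamic_range_db(10.0, 1e-5)   # about 120 dB
```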

  20. Effectiveness of the Marine Corps’ Junior Enlisted Performance Evaluation System: An Evaluation of Proficiency and Conduct Marks

    DTIC Science & Technology

    2017-03-01

    Effectiveness of the Marine Corps' Junior Enlisted Performance Evaluation System: An Evaluation of Proficiency and Conduct Marks, by Richard B. Larger Jr. … in order to improve interpretability and minimize redundancies. Subject terms: performance evaluation, proficiency marks, conduct marks.

  1. [Evaluation of medication risk in pregnant women: methodology of evaluation and risk management].

    PubMed

    Eléfant, E; Sainte-Croix, A

    1997-01-01

    This round table discussion was devoted to the description of the tools currently available for the evaluation of drug risks and management during pregnancy. Five topics were submitted for discussion: pre-clinical data, methodological tools, benefit/risk ratio before prescription, teratogenic or fetal risk evaluation, legal comments.

  2. Data-Intensive Evaluation: The Concept, Methods, and Prospects of Higher Education Monitoring Evaluation

    ERIC Educational Resources Information Center

    Wang, Zhanjun; Qiao, Weifeng; Li, Jiangbo

    2016-01-01

    Higher education monitoring evaluation is a process that uses modern information technology to continually collect and deeply analyze relevant data, visually present the state of higher education, and provide an objective basis for value judgments and scientific decision making by diverse bodies. Higher education monitoring evaluation is…

  3. Industrial laser welding evaluation study

    NASA Technical Reports Server (NTRS)

    Hella, R.; Locke, E.; Ream, S.

    1974-01-01

    High power laser welding was evaluated for fabricating space vehicle boosters. This evaluation was made for 1/4 in. and 1/2 in. aluminum (2219) and 1/4 in. and 1/2 in. D6AC steel. The Avco HPL 10 kW industrial laser was used to perform the evaluation. The objective has been achieved through the completion of the following technical tasks: (1) parameter study to optimize welding and material parameters; (2) preparation of welded panels for MSFC evaluation; and (3) demonstration of the repeatability of laser welding equipment. In addition, the design concept for a laser welding system capable of welding large space vehicle boosters has been developed.

  4. Teacher Evaluation. Policy Brief.

    ERIC Educational Resources Information Center

    Glass, Gene V.

    2004-01-01

    Traditional forms of evaluating teachers (e.g., inspection of credentials, supervisor and peer observation and rating) for purposes of hiring, promotion, and salary increases have served the profession of teaching well for decades and should receive continued support in policy and practice. Newer forms of evaluation--primarily paper-and-pencil…

  5. International Perspectives on Evaluation

    ERIC Educational Resources Information Center

    Schwandt, Thomas A.

    2013-01-01

    Over the past year, the Board of Directors of the American Evaluation Association (AEA) has been discussing ways in which AEA can strengthen its relationships and build collaborative partnerships within the international evaluation community as well as increase AEA members' awareness of and capacity to engage issues that shape evaluation…

  6. Evaluating New Technology.

    PubMed

    Carniol, Paul J; Heffelfinger, Ryan N; Grunebaum, Lisa D

    2018-05-01

    There are multiple complex issues to consider when evaluating any new technology. First, evaluate the efficacy of the device. Then, considering your patient population, decide whether this technology brings an added benefit to your patients. If it meets these 2 criteria, then proceed to the financial analysis of acquiring this technology. The complete financial analysis has several important components that include, but are not limited to, cost, value, alternatives, return on investment, and associated marketing expense. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Aligning Collaborative and Culturally Responsive Evaluation Approaches

    ERIC Educational Resources Information Center

    Askew, Karyl; Beverly, Monifa Green; Jay, Michelle L.

    2012-01-01

    The authors, three African-American women trained as collaborative evaluators, offer a comparative analysis of collaborative evaluation (O'Sullivan, 2004) and culturally responsive evaluation approaches (Frierson, Hood, & Hughes, 2002; Kirkhart & Hopson, 2010). Collaborative evaluation techniques immerse evaluators in the cultural milieu…

  8. Institutional design and utilization of evaluation: a contribution to a theory of evaluation influence based on Swiss experience.

    PubMed

    Balthasar, Andreas

    2009-06-01

    Growing interest in the institutionalization of evaluation in the public administration raises the question as to which institutional arrangement offers optimal conditions for the utilization of evaluations. Institutional arrangement denotes the formal organization of processes and competencies, together with procedural rules, that are applicable independently of individual evaluation projects. It reflects the evaluation practice of an institution and defines the distance between evaluators and evaluees. This article outlines the results of a broad-based study of all 300 or so evaluations that the Swiss Federal Administration completed from 1999 to 2002. On this basis, it derives a theory of the influence of institutional factors on the utilization of evaluations.

  9. Interfacing theories of program with theories of evaluation for advancing evaluation practice: Reductionism, systems thinking, and pragmatic synthesis.

    PubMed

    Chen, Huey T

    2016-12-01

    Theories of program and theories of evaluation form the foundation of program evaluation theories. Theories of program reflect assumptions on how to conceptualize an intervention program for evaluation purposes, while theories of evaluation reflect assumptions on how to design a useful evaluation. These two types of theories are related but often discussed separately. This paper uses three theoretical perspectives (reductionism, systems thinking, and pragmatic synthesis) to interface them and discusses the implications for evaluation practice. Reductionism proposes that an intervention program can be broken into crucial components for rigorous analysis; systems thinking views an intervention program as dynamic and complex, requiring a holistic examination. In spite of their contributions, reductionism and systems thinking represent the extreme ends of a theoretical spectrum; many real-world programs, however, fall in the middle. Pragmatic synthesis is being developed to serve these moderate-complexity programs. These three theoretical perspectives have their own strengths and challenges. Knowledge of these three perspectives and their evaluation implications can provide a better guide for designing fruitful evaluations, improving the quality of evaluation practice, informing potential areas for developing cutting-edge evaluation approaches, and contributing to advancing program evaluation toward a mature applied science. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Solar energy program evaluation: an introduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    deLeon, P.

    The Program Evaluation Methodology provides an overview of the practice and methodology of program evaluation and defines more precisely the evaluation techniques and methodologies that would be most appropriate to government organizations actively involved in the research, development, and commercialization of solar energy systems. Formal evaluation cannot be treated as a single methodological approach for assessing a program. There are four basic types of evaluation designs: the pre-experimental design; the quasi-experimental design based on time series; the quasi-experimental design based on comparison groups; and the true experimental design. This report first introduces the role and issues of evaluation, providing a set of issues with which to organize the subsequent sections detailing the national solar energy programs. These two themes are then integrated by examining the evaluation strategies and methodologies tailored to fit the particular needs of the various individual solar energy programs. (MCW)

  11. Using Economic Methods Evaluatively

    ERIC Educational Resources Information Center

    King, Julian

    2017-01-01

    As evaluators, we are often asked to determine whether policies and programs provide value for the resources invested. Addressing that question can be a quandary, and, in some cases, evaluators question whether cost-benefit analysis is fit for this purpose. With increased interest globally in social enterprise, impact investing, and social impact…

  12. 1981-1982 Evaluation Findings.

    ERIC Educational Resources Information Center

    Austin Independent School District, TX. Office of Research and Evaluation.

    The findings in evaluation and testing activities of the Austin Independent School District (AISD) during the 1981-82 school year are summarized. The first section, "1982 at a Glance," discusses the evaluation findings as a whole. Final reports and abstracts of related reports on achievement test results are presented for the district…

  13. Evaluation of Bibliographic Instruction.

    ERIC Educational Resources Information Center

    Hardesty, Larry

    Arguing that there is a current tendency among librarians to talk more about the evaluation of bibliographic instruction than to actually do anything about it, this paper examines limitations of and considerations pertaining to evaluation and includes: (1) a brief discussion of the history of bibliographic instruction; (2) discussion of types of…

  14. Proof Constructions and Evaluations

    ERIC Educational Resources Information Center

    Stylianides, Andreas J.; Stylianides, Gabriel J.

    2009-01-01

    In this article, we focus on a group of 39 prospective elementary (grades K-6) teachers who had rich experiences with proof, and we examine their ability to construct proofs and evaluate their own constructions. We claim that the combined "construction-evaluation" activity helps illuminate certain aspects of prospective teachers' and presumably…

  15. Theory-Based Stakeholder Evaluation

    ERIC Educational Resources Information Center

    Hansen, Morten Balle; Vedung, Evert

    2010-01-01

    This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically, all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…

  16. Non-formal educator use of evaluation results.

    PubMed

    Baughman, Sarah; Boyd, Heather H; Franz, Nancy K

    2012-08-01

    Increasing demands for accountability in educational programming have resulted in increasing calls for program evaluation in educational organizations. Many organizations include conducting program evaluations as part of the job responsibilities of program staff. Cooperative Extension is a complex organization offering non-formal educational programs through land grant universities. Many Extension services require non-formal educational program evaluations be conducted by field-based Extension educators. Evaluation research has focused primarily on the efforts of professional, external evaluators; the work of program staff with many responsibilities, including program evaluation, has received little attention. This study examined how field-based Extension educators (i.e., program staff) in four Extension services use the results of evaluations of programs that they have conducted themselves. Four types of evaluation use are measured and explored: instrumental use, conceptual use, persuasive use, and process use. Results indicate that there are few programmatic changes as a result of evaluation findings among the non-formal educators surveyed in this study. Extension educators tend to use evaluation results to persuade others about the value of their programs and to learn from the evaluation process. Evaluation use is driven by accountability measures, with very little program-improvement use as measured in this study. Practical implications include delineating accountability and program-improvement tasks within complex organizations in order to align evaluation efforts and to improve the results of both. There is some evidence that evaluation capacity building efforts may be increasing instrumental use by educators evaluating their own programs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. 48 CFR 1315.305 - Proposal evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Proposal evaluation. 1315... AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 1315.305 Proposal evaluation. At the... evaluation team. ...

  18. Evaluation of the Neutron Data Standards

    NASA Astrophysics Data System (ADS)

    Carlson, A. D.; Pronyaev, V. G.; Capote, R.; Hale, G. M.; Chen, Z.-P.; Duran, I.; Hambsch, F.-J.; Kunieda, S.; Mannhart, W.; Marcinkevicius, B.; Nelson, R. O.; Neudecker, D.; Noguere, G.; Paris, M.; Simakov, S. P.; Schillebeeckx, P.; Smith, D. L.; Tao, X.; Trkov, A.; Wallner, A.; Wang, W.

    2018-02-01

    With the need for improving existing nuclear data evaluations (e.g., the ENDF/B-VIII.0 and JEFF-3.3 releases), the first step was to evaluate the standards for use in such a library. This new standards evaluation made use of improved experimental data and some developments in the methodology of analysis and evaluation. In addition to the work on the traditional standards, this work produced the extension of some energy ranges and includes new reactions that are called reference cross sections. Since the effort extends beyond the traditional standards, it is called the neutron data standards evaluation. This international effort has produced new evaluations of the following cross section standards: the H(n,n), 6Li(n,t), 10B(n,α), 10B(n,α1γ), natC(n,n), Au(n,γ), 235U(n,f) and 238U(n,f). Also in the evaluation process the 238U(n,γ) and 239Pu(n,f) cross sections that are not standards were evaluated. Evaluations were also obtained for data that are not traditional standards: the Maxwellian spectrum averaged cross section for the Au(n,γ) cross section at 30 keV; reference cross sections for prompt γ-ray production in fast neutron-induced reactions; reference cross sections for very high energy fission cross sections; the 252Cf spontaneous fission neutron spectrum and the 235U prompt fission neutron spectrum induced by thermal incident neutrons; and the thermal neutron constants. The data and covariance matrices of the uncertainties were obtained directly from the evaluation procedure.
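
    A standards evaluation of this kind combines many discrepant measurements, together with their covariance matrices, into recommended values with uncertainties. As a minimal illustration of the underlying idea (a hypothetical helper, not the specialized evaluation codes actually used for the standards), generalized least squares combines n measurements of a single quantity:

```python
import numpy as np

def gls_combine(y, V):
    """Combine measurements y of the same quantity, with covariance matrix V,
    into a single evaluated value and its variance (generalized least squares)."""
    Vinv = np.linalg.inv(V)
    ones = np.ones(len(y))
    var = 1.0 / (ones @ Vinv @ ones)      # variance of the combined value
    mean = var * (ones @ Vinv @ y)        # covariance-weighted mean
    return mean, var
```

For two independent, equally uncertain measurements this reduces to the ordinary average with half the variance; correlated uncertainties change both the weights and the combined variance.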

  19. Intelligent interface design and evaluation

    NASA Technical Reports Server (NTRS)

    Greitzer, Frank L.

    1988-01-01

    Intelligent interface concepts and systematic approaches to assessing their functionality are discussed. Four general features of intelligent interfaces are described: interaction efficiency, subtask automation, context sensitivity, and use of an appropriate design metaphor. Three evaluation methods are discussed: Functional Analysis, Part-Task Evaluation, and Operational Testing. Design and evaluation concepts are illustrated with examples from a prototype expert system interface for environmental control and life support systems for manned space platforms.

  20. Microgravity Workstation and Restraint Evaluations

    NASA Technical Reports Server (NTRS)

    Chmielewski, C.; Whitmore, M.; Mount, F.

    1999-01-01

    Confined workstations, where the operator has limited visibility and physical access to the work area, may cause prolonged periods of unnatural posture. Impacts on performance, in terms of fatigue and posture, may occur, especially if the task is tedious and repetitive or requires static muscle loading. The glovebox design is a good example of the confined workstation concept. Within the scope of the 'Microgravity Workstation and Restraint Evaluation' project, funded by the NASA Headquarters Life Sciences Division, it was proposed to conduct a series of evaluations in ground, KC-135, and Shuttle environments to investigate the human factors issues concerning confined/unique workstations, such as gloveboxes, including crew restraint requirements. As part of the proposed integrated evaluations, two Shuttle Detailed Supplementary Objectives (DSOs) were manifested; one on Space Transportation System (STS)-90 and one on STS-88. The DSO on STS-90 evaluated use of the General Purpose Workstation (GPWS). The STS-88 mission was planned to evaluate a restraint system at the Remote Manipulator System (RMS). In addition, KC-135 flights were conducted to investigate user/workstation/restraint integration for long-duration microgravity use. The scope of these evaluations included workstations and restraints to be utilized in the ISS environment, but also incorporated other workstations/restraints in an attempt to provide findings/requirements with broader applications across multiple programs (e.g., Shuttle, ISS, and future Lunar-Mars programs). In addition, a comprehensive electronic questionnaire, which will compile crewmembers' lessons-learned information concerning glovebox and restraint use following their missions, has been prepared and is under review by the Astronaut Office. These evaluations were intended to be complementary and were coordinated with hardware developers, users (crewmembers), and researchers. This report is intended to provide a summary of the

  1. Research status and evaluation system of heat source evaluation method for central heating

    NASA Astrophysics Data System (ADS)

    Sun, Yutong; Qi, Junfeng; Cao, Yi

    2018-02-01

    The central heating boiler room is the regional heat source of a district heating system. It is also a source of urban environmental pollution and an important factor in building energy efficiency. This article reviews evaluation methods for central heating boiler rooms in domestic and international research, and summarizes the main factors affecting the energy consumption of industrial boilers under stable operating conditions. Following the principles for establishing an evaluation index system, we find that an evaluation index system for centralized heating systems is of great significance for energy saving and environmental protection.

  2. Conducting Program Evaluation with Hispanics in Rural Settings: Ethical Issues and Evaluation Challenges

    ERIC Educational Resources Information Center

    Loi, Claudia X. Aguado; McDermott, Robert J.

    2010-01-01

    Conducting evaluations that are both valid and ethical is imperative for the support and sustainability of programs that address underserved and vulnerable populations. A key component is to have evaluators who are knowledgeable about relevant cultural issues and sensitive to population needs. Hispanics in rural settings are vulnerable for many…

  3. Content Analysis of Evaluation Instruments Used for Student Evaluation of Classroom Teaching Performance in Higher Education.

    ERIC Educational Resources Information Center

    Tagomori, Harry T.; Bishop, Laurence A.

    A major argument against evaluation of teacher performance by students pertains to the instruments being used. Colleges conduct instructional evaluation using instruments they devise, borrow, adopt, or adapt from other institutions. Whether these instruments are tested for content validity is unknown. This study determined how evaluation questions…

  4. Making Superintendent Evaluation Fun?

    ERIC Educational Resources Information Center

    Vranish, Paul L.

    2011-01-01

    The evaluation of a superintendent is often a stressful, unpleasant experience for both the superintendent and the members of the school board. Typical board trustees have little experience in evaluating CEOs. Worse yet, they are hamstrung by the limitations inherent to their roles. They lack the advantage of a day-to-day working relationship with…

  5. Evaluation of Written Language.

    ERIC Educational Resources Information Center

    Hillerich, Robert L.

    An evaluation procedure was formulated to ascertain the effectiveness of an emphasis on the clarity and interest appeal of a composition as opposed to its mechanical correctness in improving a child's written expression. A random sample of themes were submitted to a general evaluation of content by six criteria and a linguistic analysis by nine…

  6. Swine: Selection and Evaluation.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    Designed for secondary vocational agriculture students, this text provides an overview of selecting and evaluating swine in Future Farmers of America livestock judging events. The first of four major sections addresses topics such as the main points in evaluating market hogs and breeding swine and provides an example class of swine. Section 2,…

  7. The Logic of Evaluation.

    ERIC Educational Resources Information Center

    Welty, Gordon A.

    The logic of the evaluation of educational and other action programs is discussed from a methodological viewpoint. However, no attempt is made to develop methods of evaluating programs. In Part I, the structure of an educational program is viewed as a system with three components--inputs, transformation of inputs into outputs, and outputs. Part II…

  8. Project Pride Evaluation Report.

    ERIC Educational Resources Information Center

    Jennewein, Marilyn; And Others

    Project PRIDE (Probe, Research, Inquire, Discover, and Evaluate) is evaluated in this report to provide data to be used as a learning tool for project staff and student participants. Major objectives of the project are to provide an inter-disciplinary, objective approach to the study of the American heritage, and to incorporate methods and…

  9. An evaluation method for nanoscale wrinkle

    NASA Astrophysics Data System (ADS)

    Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.

    2016-06-01

    In this paper, a spectrum-based wrinkling analysis method using the two-dimensional Fourier transform is proposed, aiming to solve the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, from a single wrinkling image. Based on this method, the evaluated nanoscale wrinkle characteristics agree with published experimental results to within 6%. The method is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which helps to reveal the mechanism of nanoscale wrinkling.
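
    The core idea can be sketched as follows (function and parameter names are hypothetical; this does not reproduce the paper's actual pipeline): take the 2-D FFT of the surface image, suppress the DC component, and read the dominant wrinkle wavelength and direction off the strongest spectral peak.

```python
import numpy as np

def wrinkle_wavelength_direction(image, pixel_size):
    """Estimate the dominant wrinkle wavelength and direction from a 2-D
    height map (image), with pixel_size the physical size of one pixel."""
    n, m = image.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    cy, cx = n // 2, m // 2
    spec[cy - 1:cy + 2, cx - 1:cx + 2] = 0.0   # zero the DC neighbourhood
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    fy = (iy - cy) / (n * pixel_size)           # spatial frequency, y
    fx = (ix - cx) / (m * pixel_size)           # spatial frequency, x
    wavelength = 1.0 / np.hypot(fx, fy)
    direction = np.degrees(np.arctan2(fy, fx))  # normal to the wrinkle crests
    return wavelength, direction
```

For a synthetic sinusoidal wrinkle field of known period, the recovered wavelength matches the input period; real AFM or SEM images would need windowing and noise suppression first.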

  10. Evaluation of the FORETELL consortium operational test : weather information for surface transportation, evaluation strategy

    DOT National Transportation Integrated Search

    1998-07-01

    The purpose of the independent evaluation is to assess the effectiveness of the FORETELL Program in achieving certain ARTS goals and objectives. Independent evaluations of ITS Operational Tests require a well documented structured approach to ensure ...

  11. Dissimilar metals joint evaluation

    NASA Technical Reports Server (NTRS)

    Wakefield, M. E.; Apodaca, L. E.

    1974-01-01

    Dissimilar metals tubular joints between 2219-T851 aluminum alloy and 304L stainless steel were fabricated and tested to evaluate bonding processes. Joints were fabricated by four processes: (1) inertia (friction) welding, where the metals are spun and forced together to create the weld; (2) explosive welding, where the metals are impacted together at high velocity; (3) co-extrusion, where the metals are extruded in contact at high temperature to promote diffusion; and (4) swaging, where residual stresses in the metals after a stretching operation maintain forced contact in mutual shear areas. Fifteen joints of each type were prepared and evaluated in a 6.35 cm (2.50 in.) O.D. size, with 0.32 cm (0.13 in.) wall thickness and 7.6 cm (3.0 in.) total length. The joints were tested to evaluate their ability to withstand pressure cycle, thermal cycle, galvanic corrosion, and burst tests. Leakage tests and other non-destructive test techniques were used to evaluate the behavior of the joints, and the microstructure of the bond areas was analyzed.

  12. The Urban Intensive Land-use Evaluation in Xi’an, Based on Fuzzy Comprehensive Evaluation

    NASA Astrophysics Data System (ADS)

    Shi, Ru; Kang, Zhiyuan

    2018-01-01

    Intensive land use is the basis of urban “stock optimization”, and a scientific, reasonable evaluation is an important part of assessing land-use intensity. In this paper, based on a survey of land-use conditions in Xi’an, we construct a suitable evaluation index system for the city’s intensive land use, using a combination of the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). Through analysis of the factors influencing intensive land use, we provide a reference for the city’s future development direction.
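
    The AHP/FCE combination the authors describe can be sketched as follows (illustrative only; the matrix values and grade sets are hypothetical, not the paper's data): AHP derives indicator weights from a pairwise-comparison matrix, and FCE composes those weights with a fuzzy membership matrix to score the evaluation grades.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def fuzzy_evaluate(weights, R):
    """Fuzzy comprehensive evaluation B = W . R, where R[i, j] is the
    membership of indicator i in evaluation grade j; B is normalized."""
    B = weights @ R
    return B / B.sum()
```

A full application would also check the consistency ratio of the pairwise matrix before trusting the weights.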

  13. An Empirical Examination of Validity in Evaluation

    ERIC Educational Resources Information Center

    Peck, Laura R.; Kim, Yushim; Lucio, Joanna

    2012-01-01

    This study addresses validity issues in evaluation that stem from Ernest R. House's book, "Evaluating With Validity". The authors examine "American Journal of Evaluation" articles from 1980 to 2010 that report the results of policy and program evaluations. The authors classify these evaluations according to House's "major approaches" typology…

  14. Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.

    ERIC Educational Resources Information Center

    Gray, Peter J.

    This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…

  15. Disability Evaluation in Japan

    PubMed Central

    2009-01-01

    To examine the current state and social ramifications of disability evaluation in Japan, public data from Annual Reports on Health and Welfare 1998-1999 were investigated. All data were analyzed based on the classification of disabilities and the effects of age-appropriate welfare services, which have been developed through a half-century of legislative efforts to support disability evaluation. These data suggest that disability evaluation, while essentially affected by age and impairment factors at a minimum, was impacted more by the assistive environment for disabilities. The assistive environment was found to be closely linked with the welfare support system related to a global assessment in the field of community-based rehabilitation. PMID:19503677

  16. Evaluation of accountability measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacic, C.G.

    The New Brunswick Laboratory (NBL) is programmatically responsible to the U.S. Department of Energy (DOE) Office of Safeguards and Security (OSS) for providing independent review and evaluation of accountability measurement technology in DOE nuclear facilities. This function is addressed in part through the NBL Safeguards Measurement Evaluation (SME) Program. The SME Program utilizes both on-site review of measurement methods and material-specific measurement evaluation studies to provide information concerning the adequacy of subject accountability measurements. This paper reviews SME Program activities for the 1986-87 time period, with emphasis on noted improvements in measurement capabilities. Continued evolution of the SME Program to respond to changing safeguards concerns is discussed.

  17. A School Evaluation Policy with a Dual Character: Evaluating the School Evaluation Policy in Hong Kong from the Perspective of Curriculum Leaders

    ERIC Educational Resources Information Center

    Yeung, Sze Yin Shirley

    2012-01-01

    This article reports research conducted recently into evaluation policy. The research comprises two parts: a questionnaire survey and qualitative interviews. Drawing from data collected in a survey of 65 curriculum leaders and interviews with 12 from the group, the article discusses how school evaluation policy functions to help make schools…

  18. Evaluating a Moving Target: Lessons Learned from Using Practical Participatory Evaluation (P-PE) in Hospital Settings

    ERIC Educational Resources Information Center

    Wharton, Tracy; Alexander, Neil

    2013-01-01

    This article describes lessons learned about implementing evaluations in hospital settings. In order to overcome the methodological dilemmas inherent in this environment, we used a practical participatory evaluation (P-PE) strategy to engage as many stakeholders as possible in the process of evaluating a clinical demonstration project.…

  19. Clinical Evaluation of Baccalaureate Nursing Students Using SBAR Format: Faculty versus Self Evaluation

    ERIC Educational Resources Information Center

    Saied, Hala; James, Joemol; Singh, Evangelin Jeya; Al Humaied, Lulawah

    2016-01-01

    Clinical training is of paramount importance in nursing education and clinical evaluation is one of the most challenging responsibilities of nursing faculty. The use of objective tools and criteria and involvement of the students in the evaluation process are some techniques to facilitate quality learning in the clinical setting. Aim: The aim of…

  20. Can You Increase Teacher Engagement with Evaluation Simply by Improving the Evaluation System?

    ERIC Educational Resources Information Center

    Moskal, Adon C. M.; Stein, Sarah J.; Golding, Clinton

    2016-01-01

    We know various factors can influence how teaching staff engage with student evaluation, such as institutional policies or staff beliefs. However, little research has investigated the influence of the technical processes of an evaluation system. In this article, we present a case study of the effects of changing the technical system for…

  1. AMEE Education Guide no. 29: evaluating educational programmes.

    PubMed

    Goldie, John

    2006-05-01

    Evaluation has become an applied science in its own right over the last 40 years. This guide reviews the history of programme evaluation, from its initial concern with methodology, through concern with the context of evaluation practice, to the challenge of fitting evaluation results into highly politicized and decentralized systems. It provides a framework for potential evaluators considering undertaking evaluation. Covered are the role of the evaluator; the ethics of evaluation; choosing the questions to be asked; evaluation design, including the dimensions of evaluation and the range of evaluation approaches available to guide evaluators; interpreting and disseminating the findings; and influencing decision making.

  2. High-speed road profile equipment evaluation.

    DOT National Transportation Integrated Search

    1966-01-01

    The importance of evaluating the relative smoothness of pavements is well recognized in the highway profession. In the past, however, this evaluation has been largely a matter of : qualitative judgment. Such evaluations are useful in serviceability-p...

  3. The case for applying an early-lifecycle technology evaluation methodology to comparative evaluation of requirements engineering research

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2003-01-01

    The premise of this paper is that there is a useful analogy between evaluation of proposed problem solutions and evaluation of requirements engineering research itself. Both of these application areas face the challenges of evaluation early in the lifecycle, of the need to consider a wide variety of factors, and of the need to combine inputs from multiple stakeholders in making these evaluations and subsequent decisions.

  4. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  5. Evaluation, or Just Data Collection? An Exploration of the Evaluation Practice of Selected UK Environmental Educators

    ERIC Educational Resources Information Center

    West, Sarah Elizabeth

    2015-01-01

    Little is known about the evaluation practices of environmental educators. Questionnaires and discussion groups with a convenience sample of UK-based practitioners were used to uncover their evaluation methods. Although many report that they are evaluating regularly, this is mainly monitoring numbers of participants or an assessment of enjoyment.…

  6. Evaluation of the Neutron Data Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, A. D.; Pronyaev, V. G.; Capote, R.

    With the need for improving existing nuclear data evaluations (e.g., the ENDF/B-VIII.0 and JEFF-3.3 releases), the first step was to evaluate the standards for use in such a library. This new standards evaluation made use of improved experimental data and some developments in the methodology of analysis and evaluation. In addition to the work on the traditional standards, this work produced the extension of some energy ranges and includes new reactions that are called reference cross sections. Since the effort extends beyond the traditional standards, it is called the neutron data standards evaluation. This international effort has produced new evaluations of the following cross section standards: the H(n,n), 6Li(n,t), 10B(n,α), 10B(n,α1γ), natC(n,n), Au(n,γ), 235U(n,f) and 238U(n,f). Also in the evaluation process the 238U(n,γ) and 239Pu(n,f) cross sections that are not standards were evaluated. Evaluations were also obtained for data that are not traditional standards: the Maxwellian spectrum averaged cross section for the Au(n,γ) cross section at 30 keV; reference cross sections for prompt γ-ray production in fast neutron-induced reactions; reference cross sections for very high energy fission cross sections; the 252Cf spontaneous fission neutron spectrum and the 235U prompt fission neutron spectrum induced by thermal incident neutrons; and the thermal neutron constants. The data and covariance matrices of the uncertainties were obtained directly from the evaluation procedure.

  7. Evaluation of the Neutron Data Standards

    DOE PAGES

    Carlson, A. D.; Pronyaev, V. G.; Capote, R.; ...

    2018-02-01

    With the need for improving existing nuclear data evaluations (e.g., the ENDF/B-VIII.0 and JEFF-3.3 releases), the first step was to evaluate the standards for use in such a library. This new standards evaluation made use of improved experimental data and some developments in the methodology of analysis and evaluation. In addition to the work on the traditional standards, this work produced the extension of some energy ranges and includes new reactions that are called reference cross sections. Since the effort extends beyond the traditional standards, it is called the neutron data standards evaluation. This international effort has produced new evaluations of the following cross section standards: the H(n,n), 6Li(n,t), 10B(n,α), 10B(n,α1γ), natC(n,n), Au(n,γ), 235U(n,f) and 238U(n,f). Also in the evaluation process the 238U(n,γ) and 239Pu(n,f) cross sections that are not standards were evaluated. Evaluations were also obtained for data that are not traditional standards: the Maxwellian spectrum averaged cross section for the Au(n,γ) cross section at 30 keV; reference cross sections for prompt γ-ray production in fast neutron-induced reactions; reference cross sections for very high energy fission cross sections; the 252Cf spontaneous fission neutron spectrum and the 235U prompt fission neutron spectrum induced by thermal incident neutrons; and the thermal neutron constants. The data and covariance matrices of the uncertainties were obtained directly from the evaluation procedure.

  8. Improving evaluation at two medical schools.

    PubMed

    Schiekirka-Schwake, Sarah; Dreiling, Katharina; Pyka, Katharina; Anders, Sven; von Steinbüchel, Nicole; Raupach, Tobias

    2017-08-03

    Student evaluations of teaching can provide useful feedback for teachers and programme coordinators alike. We have designed a novel evaluation tool assessing teacher performance and student learning outcome. This tool was implemented at two German medical schools. In this article, we report student and teacher perceptions of the novel tool and the implementation process. Focus group discussions as well as one-to-one interviews involving 22 teachers and 31 undergraduate medical students were conducted. Following adjustments to the feedback reports (e.g. the colour coding of results) at one medical school, 42 teachers were asked about their perceptions of the revised report and the personal benefit of the evaluation tool. Teachers appreciated the individual feedback provided by the evaluation tool and stated that they wanted to improve their teaching based on the results; however, they missed most of the preparative communication. Students were unsure about the additional benefit of the instrument compared with traditional evaluation tools. A majority was unwilling to complete evaluation forms in their spare time, and some felt that the new questionnaire was too long and that the evaluations occurred too often. They were particularly interested in feedback on how their comments have helped to further improve teaching. Despite evidence of the utility of the tool for individual teachers, implementation of changes to the evaluation process appears to have been suboptimal, mainly owing to a perceived lack of communication. In order to motivate students to provide evaluation data, feedback loops including aims and consequences should be established. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  9. Evaluating nursing administration instruments.

    PubMed

    Huber, D L; Maas, M; McCloskey, J; Scherb, C A; Goode, C J; Watson, C

    2000-05-01

    To identify and evaluate available measures that can be used to examine the effects of management innovations in five important areas: autonomy, conflict, job satisfaction, leadership, and organizational climate. Management interventions target the context in which care is delivered and through which evidence for practice diffuses. These innovations need to be evaluated for their effects on desired outcomes. However, busy nurses may not have the time to locate, evaluate, and select instruments to measure expected nursing administration outcomes without research-based guidance. Multiple and complex important contextual variables need psychometrically sound and easy-to-use measurement instruments identified for use in both practice and research. An expert focus group consensus methodology was used in this evaluation research to review available instruments in the five areas and evaluate which of these instruments are psychometrically sound and easy to use in the practice setting. The result is a portfolio of measures, clustered by concept and displayed on a spreadsheet. Retrieval information is provided. The portfolio includes the expert consensus judgment as well as useful descriptive information. The research reported here identifies psychometrically sound and easy-to-use instruments for measuring five key variables to be included in a portfolio. The results of this study can be used as a beginning for saving time in instrument selection and as an aid for determining the best instrument for measuring outcomes from a clinical or management intervention.

  10. Psychobiological responses to critically evaluated multitasking.

    PubMed

    Wetherell, Mark A; Craw, Olivia; Smith, Kenny; Smith, Michael A

    2017-12-01

    In order to understand psychobiological responses to stress it is necessary to observe how people react to controlled stressors. A range of stressors exist for this purpose; however, laboratory stressors that are representative of real life situations provide more ecologically valid opportunities for assessing stress responding. The current study assessed psychobiological responses to an ecologically valid laboratory stressor involving multitasking and critical evaluation. The stressor elicited significant increases in psychological and cardiovascular stress reactivity; however, no cortisol reactivity was observed. Other socially evaluative laboratory stressors that lead to cortisol reactivity typically require a participant to perform tasks that involve verbal responses, whilst standing in front of evaluative others. The current protocol contained critical evaluation of cognitive performance; however, this was delivered from behind a seated participant. The salience of social evaluation may therefore be related to the response format of the task and the method of evaluation. That is, the current protocol did not involve the additional vulnerability associated with in person, face-to-face contact, and verbal delivery. Critical evaluation of multitasking provides an ecologically valid technique for inducing laboratory stress and provides an alternative tool for assessing psychological and cardiovascular reactivity. Future studies could additionally use this paradigm to investigate those components of social evaluation necessary for eliciting a cortisol response.

  11. Veterinary and human vaccine evaluation methods

    PubMed Central

    Knight-Jones, T. J. D.; Edmond, K.; Gubbins, S.; Paton, D. J.

    2014-01-01

    Despite the universal importance of vaccines, approaches to human and veterinary vaccine evaluation differ markedly. For human vaccines, vaccine efficacy is the proportion of vaccinated individuals protected by the vaccine against a defined outcome under ideal conditions, whereas for veterinary vaccines the term is used for a range of measures of vaccine protection. The evaluation of vaccine effectiveness, vaccine protection assessed under routine programme conditions, is largely limited to human vaccines. Challenge studies under controlled conditions and sero-conversion studies are widely used when evaluating veterinary vaccines, whereas human vaccines are generally evaluated in terms of protection against natural challenge assessed in trials or post-marketing observational studies. Although challenge studies provide a standardized platform on which to compare different vaccines, they do not capture the variation that occurs under field conditions. Field studies of vaccine effectiveness are needed to assess the performance of a vaccination programme. However, if vaccination is performed without central co-ordination, as is often the case for veterinary vaccines, evaluation will be limited. This paper reviews approaches to veterinary vaccine evaluation in comparison to evaluation methods used for human vaccines. Foot-and-mouth disease has been used to illustrate the veterinary approach. Recommendations are made for standardization of terminology and for rigorous evaluation of veterinary vaccines. PMID:24741009

  12. Evaluating models of healthcare delivery using the Model of Care Evaluation Tool (MCET).

    PubMed

    Hudspeth, Randall S; Vogt, Marjorie; Wysocki, Ken; Pittman, Oralea; Smith, Susan; Cooke, Cindy; Dello Stritto, Rita; Hoyt, Karen Sue; Merritt, T Jeanne

    2016-08-01

    Our aim was to provide the outcome of a structured Model of Care (MoC) Evaluation Tool (MCET), developed by an FAANP Best-practices Workgroup, that can be used to guide the evaluation of existing MoCs being considered for use in clinical practice. Multiple MoCs are available, but deciding which model of health care delivery to use can be confusing. This five-component tool provides a structured assessment approach to model selection and has universal application. A literature review using CINAHL, PubMed, Ovid, and EBSCO was conducted. The MCET evaluation process includes five sequential components with a feedback loop from component 5 back to component 3 for reevaluation of any refinements. The components are as follows: (1) Background, (2) Selection of an MoC, (3) Implementation, (4) Evaluation, and (5) Sustainability and Future Refinement. This practical resource considers an evidence-based approach to use in determining the best model to implement based on need, stakeholder considerations, and feasibility. ©2015 American Association of Nurse Practitioners.

  13. Evaluation of the Brownfields Program

    EPA Pesticide Factsheets

    The Evaluation of 2003-2008 Brownfields Assessment, Revolving Loan Fund, and Cleanup Grants is the first national program evaluation of the outcomes, efficiencies, and economic benefits produced by Brownfields grants.

  14. Managing Tensions between Evaluation and Research: Illustrative Cases of Developmental Evaluation in the Context of Research

    ERIC Educational Resources Information Center

    Rey, Lynda; Tremblay, Marie-Claude; Brousselle, Astrid

    2014-01-01

    Developmental evaluation (DE), essentially conceptualized by Patton over the past 30 years, is a promising evaluative approach intended to support social innovation and the deployment of complex interventions. Its use is often justified by the complex nature of the interventions being evaluated and the need to produce useful results in real time.…

  15. Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture

    DOEpatents

    Muller, George; Perkins, Casey J.; Lancaster, Mary J.; MacDonald, Douglas G.; Clements, Samuel L.; Hutton, William J.; Patrick, Scott W.; Key, Bradley Robert

    2015-07-28

    Computer-implemented security evaluation methods, security evaluation systems, and articles of manufacture are described. According to one aspect, a computer-implemented security evaluation method includes accessing information regarding a physical architecture and a cyber architecture of a facility, building a model of the facility comprising a plurality of physical areas of the physical architecture, a plurality of cyber areas of the cyber architecture, and a plurality of pathways between the physical areas and the cyber areas, identifying a target within the facility, executing the model a plurality of times to simulate a plurality of attacks against the target by an adversary traversing at least one of the areas in the physical domain and at least one of the areas in the cyber domain, and using results of the executing, providing information regarding a security risk of the facility with respect to the target.
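
    Executing the facility model many times to simulate attacks is, in essence, Monte Carlo estimation over attack pathways. A toy sketch, where the pathway names and per-step breach probabilities are hypothetical and not taken from the patent:

```python
import random

# Hypothetical facility: each pathway to the target is a sequence of physical
# or cyber areas, each with an assumed probability of being breached.
PATHWAYS = {
    "fence -> server_room":      [0.3, 0.5],  # physical areas
    "phishing -> scada_network": [0.4, 0.6],  # cyber areas
}

def simulate(n_trials, rng=random.Random(42)):
    """Estimate the probability that at least one attack path reaches the target."""
    successes = 0
    for _ in range(n_trials):
        if any(all(rng.random() < p for p in steps) for steps in PATHWAYS.values()):
            successes += 1
    return successes / n_trials

print(simulate(100_000))  # roughly 0.15 + 0.24 - overlap, i.e. about 0.35
```

    The estimated probability (and which pathways dominate it) is the kind of "information regarding a security risk of the facility" the method reports.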

  16. Classification and Quality Evaluation of Tobacco Leaves Based on Image Processing and Fuzzy Comprehensive Evaluation

    PubMed Central

    Zhang, Fan; Zhang, Xinhong

    2011-01-01

    Most classification, quality evaluation, and grading of flue-cured tobacco leaves is performed manually; it relies on the judgmental experience of experts and is inevitably limited by personal, physical, and environmental factors. Classification and quality evaluation are therefore subjective and experience-based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy set theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the features of tobacco leaves in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show that the classification accuracy is about 94% for trained tobacco leaves and about 72% for non-trained tobacco leaves. We believe that fuzzy comprehensive evaluation is a viable approach to the automatic classification and quality evaluation of tobacco leaves. PMID:22163744
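
    Fuzzy comprehensive evaluation combines a feature weight vector W with a membership matrix R (B = W · R) and selects the grade with the highest aggregate membership. A minimal single-level sketch; the features, weights, grades, and membership values here are illustrative, not the paper's:

```python
# Single-level fuzzy comprehensive evaluation: B = W . R, then pick the
# grade with the highest aggregate membership.
GRADES = ["premium", "medium", "low"]

def fuzzy_evaluate(weights, membership):
    """weights: importance of each feature (sums to 1).
    membership: per-feature membership degrees in each grade."""
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(len(GRADES))]
    return GRADES[b.index(max(b))], b

# Features: color, size, texture (illustrative membership degrees).
weights = [0.5, 0.2, 0.3]
membership = [
    [0.7, 0.2, 0.1],   # color mostly suggests "premium"
    [0.4, 0.5, 0.1],   # size is ambiguous
    [0.6, 0.3, 0.1],   # texture suggests "premium"
]
grade, scores = fuzzy_evaluate(weights, membership)
print(grade, [round(s, 2) for s in scores])  # premium [0.61, 0.29, 0.1]
```

    In the two-level variant the paper describes, the outputs of several such first-level evaluations (one per feature group) become the rows of a second-level membership matrix.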

  17. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  18. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  19. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  20. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. Prepare a separate performance evaluation after... familiar with the architect-engineer contractor's performance. [76 FR 58155, Sept. 20, 2011] ...

  1. Evaluating Recommendation Systems

    NASA Astrophysics Data System (ADS)

    Shani, Guy; Gunawardana, Asela

    Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer who wishes to employ a recommendation system must choose among a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than on absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments: offline settings, where recommendation approaches are compared without user interaction; user studies, where a small group of subjects experiments with the system and reports on the experience; and large-scale online experiments, where real user populations interact with the system. In each of these cases we describe the types of questions that can be answered and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the properties that they evaluate.
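
    The offline setting described above amounts to replaying held-out interactions and scoring each candidate algorithm with the same metric. A sketch using precision@k, one common metric among the many such papers survey; the algorithm names and data are hypothetical:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations the user actually interacted with."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

# Held-out test interactions for one user, plus two candidate algorithms.
relevant = {"a", "c", "f"}
algo_popularity = ["a", "b", "c", "d"]   # hits: a, c
algo_random     = ["e", "b", "d", "f"]   # hits: f

print(precision_at_k(algo_popularity, relevant, 4))  # 0.5
print(precision_at_k(algo_random, relevant, 4))      # 0.25
```

    In a real comparative study the metric would be averaged over all test users, and the protocol (how interactions are held out) matters as much as the metric itself.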

  2. Evaluation of NASA space grant consortia programs

    NASA Technical Reports Server (NTRS)

    Eisenberg, Martin A.

    1990-01-01

    The meaningful evaluation of the NASA Space Grant Consortium and Fellowship Programs must overcome unusual difficulties: (1) the program, in its infancy, is undergoing dynamic change; (2) the several state consortia and universities have widely divergent parochial goals that defy a uniform evaluative process; and (3) the pilot-sized consortium programs require that the evaluative process be economical in human costs, lest the process of evaluation compromise the effectiveness of the programs it is meant to assess. This paper attempts to assess the context in which evaluation is to be conducted, to identify the goals and limitations inherent in the evaluation, and to recommend appropriate guidelines for evaluation.

  3. Operational Area Environmental Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey-White, Brenda Eileen; Nagy, Michael David; Wagner, Katrina Marie

    The Operational Area Environmental Evaluation update provides a description of activities that have the potential to adversely affect natural and cultural resources, including soil, air, water, biological, ecological, and historical resources. The environmental sensitivity of an area is evaluated and summarized, which may facilitate informed management decisions as to where development may be prohibited, restricted, or subject to additional requirements.

  4. Discounting in Economic Evaluations.

    PubMed

    Attema, Arthur E; Brouwer, Werner B F; Claxton, Karl

    2018-05-19

    Appropriate discounting rules in economic evaluations have received considerable attention in the literature and in national guidelines for economic evaluations. Rightfully so, as discounting can be quite influential on the outcomes of economic evaluations. The most prominent controversies regarding discounting involve the basis for and level of the discount rate, whether costs and effects should be discounted at the same rate, and whether discount rates should decline or stay constant over time. Moreover, the choice of discounting rules depends on the decision context one adopts as the most relevant. In this article, we review these issues and debates, and describe and discuss the current discounting recommendations of the countries publishing their national guidelines. We finish the article by proposing a research agenda.
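
    The debates reviewed above (same vs. differential rates for costs and effects, constant vs. declining rates) all rest on the basic present-value calculation. A minimal sketch with illustrative rates, not the recommendation of any particular guideline:

```python
def present_value(amounts, rate):
    """Discount a stream of yearly amounts (year 0 undiscounted) at a constant rate."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))

costs = [1000, 1000, 1000]  # same nominal cost in each of three years
print(round(present_value(costs, 0.035), 2))  # 2899.69 = 1000 + 966.18 + 933.51
print(round(present_value(costs, 0.015), 2))  # a lower rate discounts future years less
```

    Discounting effects (e.g. QALYs) at a lower rate than costs, or letting `rate` decline with the horizon, changes the relative attractiveness of interventions whose benefits arrive late, which is exactly why the choice is controversial.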

  5. Economic Evaluation of Health IT.

    PubMed

    Luzi, Daniela; Pecoraro, Fabrizio; Tamburis, Oscar

    2016-01-01

    Economic evaluation in health care supports decision makers in prioritizing interventions and maximizing limited available resources for social benefit. Health Information Technology (health IT) constitutes a promising strategy to improve the quality and delivery of health care. However, a rigorous and multidimensional evaluation is necessary to determine whether the appropriate health IT solution has been selected in a specific health context, and to assess its impact on clinical and organizational processes, costs, user satisfaction, and patient outcomes. Starting from the principles of evaluation introduced since the mid-1980s within Health Technology Assessment (HTA) guidelines, this contribution provides an overview of the main challenging issues related to the complex task of performing an economic evaluation of health IT. A set of key principles necessary to deliver a proper design and implementation of a multidimensional economic evaluation study is described, focusing in particular on the classification of costs and outcomes as well as on the type of economic analysis to be performed. A case study is finally described to show how the key principles introduced are applied.

  6. Evaluating Behavioral Health Surveillance Systems.

    PubMed

    Azofeifa, Alejandro; Stroup, Donna F; Lyerla, Rob; Largo, Thomas; Gabella, Barbara A; Smith, C Kay; Truman, Benedict I; Brewer, Robert D; Brener, Nancy D

    2018-05-10

    In 2015, more than 27 million people in the United States reported that they currently used illicit drugs or misused prescription drugs, and more than 66 million reported binge drinking during the previous month. Data from public health surveillance systems on drug and alcohol abuse are crucial for developing and evaluating interventions to prevent and control such behavior. However, public health surveillance for behavioral health in the United States has been hindered by organizational issues and other factors. For example, existing guidelines for surveillance evaluation do not distinguish between data systems that characterize behavioral health problems and those that assess other public health problems (eg, infectious diseases). To address this gap in behavioral health surveillance, we present a revised framework for evaluating behavioral health surveillance systems. This system framework builds on published frameworks and incorporates additional attributes (informatics capabilities and population coverage) that we deemed necessary for evaluating behavioral health-related surveillance. This revised surveillance evaluation framework can support ongoing improvements to behavioral health surveillance systems and ensure their continued usefulness for detecting, preventing, and managing behavioral health problems.

  7. Evaluation of EIT system performance.

    PubMed

    Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy

    2011-07-01

    An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluate the performance of the EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters are proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. In order to facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available including the source code of the evaluation and reporting software.
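
    One of the proposed parameter classes, detectability of a single contrast, is commonly expressed as a z-score of the reconstructed contrast amplitude against noise-only frames; the exact formulation in the paper may differ, and the amplitudes below are invented for illustration:

```python
import statistics

def detectability(signal_amplitudes, noise_amplitudes):
    """z-score of the reconstructed contrast amplitude relative to noise-only frames."""
    noise_sd = statistics.stdev(noise_amplitudes)
    return (statistics.mean(signal_amplitudes)
            - statistics.mean(noise_amplitudes)) / noise_sd

# Illustrative reconstructed image amplitudes: test object present vs. empty tank.
with_object = [5.1, 4.9, 5.3, 5.0]
empty_tank  = [0.1, -0.2, 0.2, -0.1]
print(round(detectability(with_object, empty_tank), 1))  # a large z-score: clearly detectable
```

    Sweeping the test object through positions with the phantom's position controller and plotting this score against position is one way such a parameter could characterize an EIT system.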

  8. Evaluating Behavioral Health Surveillance Systems

    PubMed Central

    Azofeifa, Alejandro; Lyerla, Rob; Largo, Thomas; Gabella, Barbara A.; Smith, C. Kay; Truman, Benedict I.; Brewer, Robert D.; Brener, Nancy D.

    2018-01-01

    In 2015, more than 27 million people in the United States reported that they currently used illicit drugs or misused prescription drugs, and more than 66 million reported binge drinking during the previous month. Data from public health surveillance systems on drug and alcohol abuse are crucial for developing and evaluating interventions to prevent and control such behavior. However, public health surveillance for behavioral health in the United States has been hindered by organizational issues and other factors. For example, existing guidelines for surveillance evaluation do not distinguish between data systems that characterize behavioral health problems and those that assess other public health problems (eg, infectious diseases). To address this gap in behavioral health surveillance, we present a revised framework for evaluating behavioral health surveillance systems. This system framework builds on published frameworks and incorporates additional attributes (informatics capabilities and population coverage) that we deemed necessary for evaluating behavioral health–related surveillance. This revised surveillance evaluation framework can support ongoing improvements to behavioral health surveillance systems and ensure their continued usefulness for detecting, preventing, and managing behavioral health problems. PMID:29752804

  9. Crescent Evaluation : appendix B : state case study evaluation report

    DOT National Transportation Integrated Search

    1994-02-01

    The state case study evaluation approach uniquely captured an understanding of the potential of such a system by documenting the experiences, issues, and opportunities of selected key state government personnel from a cross-section of involved agenci...

  10. Building a community-based culture of evaluation.

    PubMed

    Janzen, Rich; Ochocka, Joanna; Turner, Leanne; Cook, Tabitha; Franklin, Michelle; Deichert, Debbie

    2017-12-01

    In this article we argue for a community-based approach as a means of promoting a culture of evaluation. We do this by linking two bodies of knowledge - the 70-year theoretical tradition of community-based research and the trans-discipline of program evaluation - that are seldom intersected within the evaluation capacity building literature. We use the three hallmarks of a community-based research approach (community-determined; equitable participation; action and change) as a conceptual lens to reflect on a case example of an evaluation capacity building program led by the Ontario Brain Institute. This program involved two community-based groups (Epilepsy Southwestern Ontario and the South West Alzheimer Society Alliance) who were supported by evaluators from the Centre for Community Based Research to conduct their own internal evaluation. The article provides an overview of a community-based research approach and its link to evaluation. It then describes the featured evaluation capacity building initiative, including reflections by the participating organizations themselves. We end by discussing lessons learned and their implications for future evaluation capacity building. Our main argument is that organizations that strive towards a community-based approach to evaluation are well placed to build and sustain a culture of evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Evaluating Alternative High Schools: Program Evaluation in Action

    ERIC Educational Resources Information Center

    Hinds, Drew Samuel Wayne

    2013-01-01

    Alternative high schools serve some of the most vulnerable students and their programs present a significant challenge to evaluate. Determining the impact of an alternative high school that serves mostly at-risk students presented a significant research problem. Few studies exist that dig deeper into the characteristics and strategies of…

  12. Commercial Vehicle Technology Evaluation Publications | Transportation

    Science.gov Websites

    NREL publishes technical reports, fact sheets, and other documents about its fleet evaluation activities: hybrid electric vehicle publications; electric and plug-in hybrid

  13. Dairy cattle genomics evaluation program update

    USDA-ARS?s Scientific Manuscript database

    Implementation of genomic evaluation has caused profound changes in dairy cattle breeding. All young bulls bought by major artificial-insemination organizations now are selected based on these evaluations. Evaluation reliability can reach ~75% for yield traits, which is adequate for marketing semen o...

  14. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  15. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  16. 48 CFR 236.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a separate...

  17. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  18. 48 CFR 36.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  19. Mass Chain Evaluation for A=95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, S.K.; Sonzogni, A.; Basu, Swapan Kr.

    2011-08-01

    A full evaluation of the mass chain A = 95 has been done in the ENSDF format taking into account all the available data until June 2009. Excited states populated by in-beam nuclear reactions and by radioactive decay have been considered. The 'evp' editor, developed at the NNDC, has been used for the evaluation. This mass chain was last evaluated in 1993. Many new and improved data were reported since then. A total of 13 nuclei have been evaluated.

  20. Evaluating ethics competence in medical education.

    PubMed Central

    Savulescu, J; Crisp, R; Fulford, K W; Hope, T

    1999-01-01

    We critically evaluate the ways in which competence in medical ethics has been evaluated. We report the initial stage in the development of a relevant, reliable and valid instrument to evaluate core critical thinking skills in medical ethics. This instrument can be used to evaluate the impact of medical ethics education programmes and to assess whether medical students have achieved a satisfactory level of performance of core skills and knowledge in medical ethics, within and across institutions. PMID:10536759

  1. Review: evaluating information systems in nursing.

    PubMed

    Oroviogoicoechea, Cristina; Elliott, Barbara; Watson, Roger

    2008-03-01

    To review existing nursing research on inpatient hospitals' information technology (IT) systems in order to explore new approaches for evaluation research in nursing informatics to guide further design and implementation of effective IT systems. There has been an increase in the use of IT and information systems in nursing in recent years. However, there has been little evaluation of these systems and little guidance on how they might be evaluated. A literature review was conducted covering 1995 to 2005 inclusive, using CINAHL and Medline and the search terms 'nursing information systems', 'clinical information systems', 'hospital information systems', 'documentation', 'nursing records' and 'charting'. Research on nursing information systems was analysed, and some deficiencies and contradictory results were identified that impede a comprehensive understanding of effective implementation. IT systems need to be understood from a wider perspective that includes aspects of the context in which they are implemented. Social and organizational aspects need to be considered in evaluation studies, and realistic evaluation can provide a framework for the evaluation of information systems in nursing. The rapid introduction of IT systems into clinical practice makes it urgent to evaluate already implemented systems, examining how and in what circumstances they work, to guide effective further development and implementation of IT systems that enhance clinical practice. Evaluation involves more than the technology itself; it also encompasses changing attitudes, cultures and healthcare practices. Realistic evaluation could provide configurations of context-mechanism-outcome that explain the underlying relationships and help to understand why and how a programme or intervention works.

  2. Evaluation of preventive maintenance treatments.

    DOT National Transportation Integrated Search

    2012-04-01

    Scrub seals were placed in 2007 in Tallahatchie, Marshall, Carroll and Grenada Counties to evaluate their effectiveness and feasibility as preventive maintenance treatments. Condition data was collected and evaluated on the project sections.

  3. Guardrail installation noise level evaluation

    DOT National Transportation Integrated Search

    1999-06-01

    The Oregon Department of Transportation (ODOT) Environmental Services Unit evaluates the impacts of noise and mitigation of noise issues. ODOT currently requires noise level evaluation for proposed construction projects when threatened or endangered ...

  4. An Evaluation of Non-Formal Education in Ecuador. Volume 2: Overview and Evaluation Plan. Final Report.

    ERIC Educational Resources Information Center

    Laosa, Luis M.; And Others

    As the second volume in a 4-volume evaluation report on the University of Massachusetts Non-Formal Education Project (UMass NFEP) in rural Ecuador, this volume details the evaluation design. Cited as basic to the evaluation design are questions which ask: (1) What kinds of effects (changes) can be observed? and (2) What are characteristics of the…

  5. Meta-evaluation of published studies on evaluation of health disaster preparedness exercises through a systematic review.

    PubMed

    Sheikhbardsiri, Hojjat; Yarmohammadian, Mohammad H; Khankeh, Hamid Reza; Nekoei-Moghadam, Mahmoud; Raeisi, Ahmad Reza

    2018-01-01

    Exercise evaluation is one of the most important, and sometimes neglected, steps in designing and conducting exercises; in this stage, related information is systematically identified, gathered, and interpreted to indicate how well an exercise has fulfilled its objectives. The present study aimed to assess the most important evaluation techniques applied in evaluating health exercises for emergencies and disasters. This was a meta-evaluation study conducted through a systematic review. In this research, we searched papers based on specific and relevant keywords in research databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, ProQuest, Wiley, Google Scholar, and Persian databases such as ISC and SID. The search keywords were "simulation," "practice," "drill," "exercise," "instrument," "tool," "questionnaire," "measurement," "checklist," "scale," "test," "inventory," "battery," "evaluation," "assessment," "appraisal," "emergency," "disaster," "crisis," "hazard," "catastrophe," "hospital," "prehospital," "health centers," and "treatment centers," used in combination with the Boolean operators OR and AND. The research findings indicate that there are different techniques and methods of data collection for evaluating the performance exercises of health centers and affiliated organizations in disasters and emergencies, including debriefing inventories, self-report, questionnaire, interview, observation, video recording, photography, and electronic equipment, which can be used individually or collectively depending on the exercise objectives. Conducting exercises in the health sector is one of the important steps in the preparation and implementation of disaster risk management programs. This study can thus be utilized to improve the preparedness of different sectors of the health system according to the latest available evaluation techniques and methods for better implementation of disaster exercise evaluation stages.

  6. Lessons learned about collaborative evaluation using the Capacity for Applying Project Evaluation (CAPE) framework with school and district leaders.

    PubMed

    Corn, Jenifer O; Byrom, Elizabeth; Knestis, Kirk; Matzen, Nita; Thrift, Beth

    2012-11-01

    Schools, districts, and state-level educational organizations are experiencing a great shift in the way they do the business of education. This shift focuses on accountability, specifically through the expectation that evaluation-focused efforts will be used effectively to guide and support decisions about educational program implementation. Accordingly, education leaders need specific guidance and training on how to plan, implement, and use evaluation to critically examine district- and school-level initiatives. One specific effort intended to address this need is the Capacity for Applying Project Evaluation (CAPE) framework. The CAPE framework is composed of three crucial components: a collection of evaluation resources; a professional development model; and a conceptual framework that guides the work to support evaluation planning and implementation in schools and districts. School and district teams serve as active participants in the professional development and ultimately as formative evaluators of their own school- or district-level programs by working collaboratively with evaluation experts. The CAPE framework involves school and district staff in planning and implementing their evaluation. They decide what evaluation questions to ask, which instruments to use, what data to collect, and how and to whom results should be reported. Initially this work is done through careful scaffolding by evaluation experts, with supports slowly pulled away as the educators gain experience and confidence in their knowledge and skills as evaluators. Because CAPE engages all stakeholders in all stages of the evaluation, the philosophical intent of these capacity-building efforts aligns closely with the collaborative evaluation approach. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. 48 CFR 225.504 - Evaluation examples.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Evaluation examples. 225.504 Section 225.504 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... 225.504 Evaluation examples. For examples that illustrate the evaluation procedures in 225.502(c)(ii...

  8. 48 CFR 1315.606-2 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Evaluation. 1315.606-2... CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 1315.606-2 Evaluation. (a) If the... 15.606-1, the contracting officer will acknowledge receipt of the proposal, coordinate evaluation...

  9. 48 CFR 1215.606-2 - Evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Evaluation. 1215.606-2... AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 1215.606-2 Evaluation. (a) Comprehensive evaluations should be completed within sixty calendar days after making the initial review...

  10. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  11. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  12. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  13. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  14. 10 CFR 420.36 - Evaluation criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Evaluation criteria. 420.36 Section 420.36 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION STATE ENERGY PROGRAM Implementation of Special Projects Financial Assistance § 420.36 Evaluation criteria. The evaluation criteria, including program activity-specific...

  15. Some Methods for Evaluating Program Implementation.

    ERIC Educational Resources Information Center

    Hardy, Roy A.

    An approach to evaluating program implementation is described. This approach includes the development of a project description which includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…

  16. 48 CFR 25.504 - Evaluation Examples.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PROGRAMS FOREIGN ACQUISITION Evaluating Foreign Offers-Supply Contracts 25.504 Evaluation Examples. The... examples assume that the contracting officer has eliminated all offers that are unacceptable for reasons... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Evaluation Examples. 25...

  17. Program Evaluation: A Review and Synthesis.

    ERIC Educational Resources Information Center

    Webber, Charles F.

    This paper reviews models of program evaluation. Major topics and issues found in the evaluation literature include quantitative versus qualitative approaches, identification and involvement of stakeholders, formulation of research questions, collection of data, analysis and interpretation of data, reporting of results, evaluation utilization, and…

  18. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4 Section 304.4 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management...

  19. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  20. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  1. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  2. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  3. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  4. 13 CFR 304.4 - Performance evaluations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management standards, financial accountability and program performance of each District Organization within three (3...

  5. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  6. 48 CFR 2936.604 - Performance evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2013-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor performance as required in FAR 36.604. Normally, the performance report must be prepared by the contracting...

  7. Evaluation of an Eating Disorder Curriculum.

    ERIC Educational Resources Information Center

    Moriarty, Dick; And Others

    1990-01-01

    A qualitative and quantitative evaluation of "A Preventive Curriculum for Anorexia Nervosa and Bulimia" is reported. The evaluation, which included teachers, researchers, health professionals, and students, included development of the curriculum as well as pilot testing activities. The curriculum development and evaluation consisted of…

  8. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of the...

  9. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of the...

  10. Ergonomics: CTD management evaluation tool.

    PubMed

    Ostendorf, J S; Rogers, B; Bertsche, P K

    2000-01-01

    Cumulative trauma disorder (CTD) occurrences peaked in 1994 and, although decreasing in 1995, still accounted for 62% of all illness cases reported. A CTD Management Evaluation Tool was developed to assist Occupational Safety and Health Compliance Officers (CSHOs) in program evaluation and in documenting the occupational health management component and the need for an ergonomics program. Occupational and environmental health nurses may use the tool not only to reduce and prevent CTD occurrences, but also as a benchmark for program evaluation.

  11. 2017 Guralp Affinity Digitizer Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    Sandia National Laboratories has tested and evaluated two Guralp Affinity digitizers. The Affinity digitizers are intended to record sensor output for seismic and infrasound monitoring applications. The purpose of this digitizer evaluation is to measure the performance characteristics in such areas as power consumption, input impedance, sensitivity, full scale, self-noise, dynamic range, system noise, response, passband, and timing. The Affinity digitizers are being evaluated for potential use in the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).

  12. Social Actions Education Evaluation Program.

    DTIC Science & Technology

    1986-04-01

    Student report: Social Actions Education Evaluation Program, Air Command and Staff College, Maxwell AFB, AL, by J. L. Skidmore, April 1986 (report number 86-2315).

  13. EVALUATION GUIDELINES FOR ECOLOGICAL INDICATORS

    EPA Science Inventory

    This document presents fifteen technical guidelines to evaluate the suitability of an ecological indicator for a particular monitoring program. The guidelines are organized within four evaluation phases: conceptual relevance, feasibility of implementation, response variability...

  14. ITS evaluation -- phase 3 (2010)

    DOT National Transportation Integrated Search

    2011-05-01

    This report documents the results of applying a previously developed, standardized approach for : evaluating intelligent transportation systems (ITS) projects to 17 ITS earmark projects. The evaluation : approach was based on a questionnaire to inves...

  15. Policy evaluation and democracy: Do they fit?

    PubMed

    Sager, Fritz

    2017-08-05

    The papers assembled in this special issue shed light on the question of the interrelation between democracy and policy evaluation by discussing research on the use of evaluations in democratic processes. The collection makes a case for a stronger presence of evaluation in democracy beyond expert utilization. Parliamentarians prove to be more acquainted with evaluations than expected, and the inclusion of evaluations in policy arguments increases the deliberative quality of democratic campaigns. In sum, evaluation and democracy turn out to be highly compatible after all. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Evaluating Computational Gene Ontology Annotations.

    PubMed

    Škunca, Nives; Roberts, Richard J; Steffen, Martin

    2017-01-01

    Two avenues to understanding gene function are complementary and often overlapping: experimental work and computational prediction. While experimental annotation generally produces high-quality annotations, it is low throughput. Conversely, computational annotations have broad coverage, but the quality of annotations may be variable, and therefore evaluating the quality of computational annotations is a critical concern. In this chapter, we provide an overview of strategies to evaluate the quality of computational annotations. First, we discuss why evaluating quality in this setting is not trivial. We highlight the various issues that threaten to bias the evaluation of computational annotations, most of which stem from the incompleteness of biological databases. Second, we discuss solutions that address these issues, for example, targeted selection of new experimental annotations and leveraging the existing experimental annotations.

  17. Quantitative evaluations of ankle spasticity and stiffness in neurological disorders using manual spasticity evaluator.

    PubMed

    Peng, Qiyu; Park, Hyung-Soon; Shah, Parag; Wilson, Nicole; Ren, Yupeng; Wu, Yi-Ning; Liu, Jie; Gaebler-Spira, Deborah J; Zhang, Li-Qun

    2011-01-01

    Spasticity and contracture are major sources of disability in people with neurological impairments that have been evaluated using various instruments: the Modified Ashworth Scale, tendon reflex scale, pendulum test, mechanical perturbations, and passive joint range of motion (ROM). These measures generally are either convenient to use in clinics but not quantitative or they are quantitative but difficult to use conveniently in clinics. We have developed a manual spasticity evaluator (MSE) to evaluate spasticity/contracture quantitatively and conveniently, with ankle ROM and stiffness measured at a controlled low velocity and joint resistance and Tardieu catch angle measured at several higher velocities. We found that the Tardieu catch angle was linearly related to the velocity, indicating that increased resistance at higher velocities was felt at further stiffer positions and, thus, that the velocity dependence of spasticity may also be position-dependent. This finding indicates the need to control velocity in spasticity evaluation, which is achieved with the MSE. Quantitative measurements of spasticity, stiffness, and ROM can lead to more accurate characterizations of pathological conditions and outcome evaluations of interventions, potentially contributing to better healthcare services for patients with neurological disorders such as cerebral palsy, spinal cord injury, traumatic brain injury, and stroke.

  18. Evaluability Assessment: A Retrospective Illustration and Review.

    ERIC Educational Resources Information Center

    Smith, Nick L.

    1981-01-01

    Rutman's view of evaluability assessment is reviewed, evaluation planning activities are illustrated via flow diagram for a large educational evaluation designed to increase citizen participation in local school activities, and some of the limitations of Rutman's evaluability procedures are outlined. (RL)

  19. A qualitative case study of evaluation use in the context of a collaborative program evaluation strategy in Burkina Faso.

    PubMed

    D'Ostie-Racine, Léna; Dagenais, Christian; Ridde, Valéry

    2016-05-26

    Program evaluation is widely recognized in the international humanitarian sector as a means to make interventions and policies more evidence based, equitable, and accountable. Yet, little is known about the way humanitarian non-governmental organizations (NGOs) actually use evaluations. The current qualitative evaluation employed an instrumental case study design to examine evaluation use (EU) by a humanitarian NGO based in Burkina Faso. This organization developed an evaluation strategy in 2008 to document the implementation and effects of its maternal and child healthcare user fee exemption program. Program evaluations have been undertaken ever since, and the present study examined the discourses of evaluation partners in 2009 (n = 15) and 2011 (n = 17). Semi-structured individual interviews and one group interview were conducted to identify instances of EU over time. Alkin and Taut's (Stud Educ Eval 29:1-12, 2003) conceptualization of EU was used as the basis for thematic qualitative analyses of the different forms of EU identified by stakeholders of the exemption program in the two data collection periods. Results demonstrated that stakeholders began to understand and value the utility of program evaluations once they were exposed to evaluation findings and then progressively used evaluations over time. EU was manifested in a variety of ways, including instrumental and conceptual use of evaluation processes and findings, as well as the persuasive use of findings. Such EU supported planning, decision-making, program practices, evaluation capacity, and advocacy. The study sheds light on the many ways evaluations can be used by different actors in the humanitarian sector. Conceptualizations of EU are also critically discussed.

  20. Evaluation of horizontal curve design

    DOT National Transportation Integrated Search

    1980-08-01

    This report documents an initial evaluation of horizontal curve design criteria which involved two phases: an observational study and an analytical evaluation. Three classes of vehicles (automobiles, school buses and tractor semi-trailers) and three ...

  1. Pragmatism, Evidence, and Mixed Methods Evaluation

    ERIC Educational Resources Information Center

    Hall, Jori N.

    2013-01-01

    Mixed methods evaluation has a long-standing history of enhancing the credibility of evaluation findings. However, using mixed methods in a utilitarian way implicitly emphasizes convenience over engaging with its philosophical underpinnings (Denscombe, 2008). Because of this, some mixed methods evaluators and social science researchers have been…

  2. Evaluation of Environmental Education in Schools.

    ERIC Educational Resources Information Center

    Connect, 1984

    1984-01-01

    This newsletter discusses the evaluation of environmental education (EE) in schools, highlighting an introductory chapter of a proposed Unesco-United Nations environmental program guide on evaluating such programs. The benefits of evaluating an EE program (including program improvement, growth in student learning, better environment, and program…

  3. 1 CFR 500.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Self-evaluation. 500.110 Section 500.110... PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL COMMISSION FOR EMPLOYMENT POLICY § 500.110 Self-evaluation... or organizations representing handicapped persons, to participate in the self-evaluation process by...

  4. 49 CFR 1014.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Self-evaluation. 1014.110 Section 1014.110... HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE SURFACE TRANSPORTATION BOARD § 1014.110 Self-evaluation... or organizations representing handicapped persons, to participate in the self-evaluation process by...

  5. 1 CFR 457.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Self-evaluation. 457.110 Section 457.110... PROGRAMS OR ACTIVITIES CONDUCTED BY THE NATIONAL CAPITAL PLANNING COMMISSION § 457.110 Self-evaluation. (a... organizations representing handicapped persons, to participate in the self-evaluation process by submitting...

  6. 22 CFR 1510.110 - Self-evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Self-evaluation. 1510.110 Section 1510.110... IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE AFRICAN DEVELOPMENT FOUNDATION § 1510.110 Self-evaluation... handicaps or organizations representing individuals with handicaps, to participate in the self-evaluation...

  7. 16 CFR 6.110 - Self-evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Self-evaluation. 6.110 Section 6.110 Commercial Practices FEDERAL TRADE COMMISSION ORGANIZATION, PROCEDURES AND RULES OF PRACTICE ENFORCEMENT OF....110 Self-evaluation. (a) The Commission shall, by February 1, 1989, evaluate its current policies and...

  8. 49 CFR 1014.110 - Self-evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or organizations representing handicapped persons, to participate in the self-evaluation process by... 49 Transportation 8 2014-10-01 2014-10-01 false Self-evaluation. 1014.110 Section 1014.110... HANDICAP IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE SURFACE TRANSPORTATION BOARD § 1014.110 Self-evaluation...

  9. 49 CFR 807.110 - Self-evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 7 2013-10-01 2013-10-01 false Self-evaluation. 807.110 Section 807.110... TRANSPORTATION SAFETY BOARD § 807.110 Self-evaluation. (a) The agency shall, by April 9, 1987, evaluate its... interested persons, including handicapped persons or organizations representing handicapped persons, to...

  10. 40 CFR 12.110 - Self-evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... organizations representing individuals with handicaps to, participate in the self-evaluation process by... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Self-evaluation. 12.110 Section 12.110... IN PROGRAMS OR ACTIVITIES CONDUCTED BY THE ENVIRONMENTAL PROTECTION AGENCY § 12.110 Self-evaluation...

  11. 22 CFR 219.110 - Self-evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Self-evaluation. 219.110 Section 219.110... INTERNATIONAL DEVELOPMENT § 219.110 Self-evaluation. (a) The agency shall, by April 9, 1987, evaluate its... interested persons, including handicapped persons or organizations representing handicapped persons, to...

  12. 16 CFR 6.110 - Self-evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Self-evaluation. 6.110 Section 6.110 Commercial Practices FEDERAL TRADE COMMISSION ORGANIZATION, PROCEDURES AND RULES OF PRACTICE ENFORCEMENT OF....110 Self-evaluation. (a) The Commission shall, by February 1, 1989, evaluate its current policies and...

  13. 16 CFR 6.110 - Self-evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Self-evaluation. 6.110 Section 6.110 Commercial Practices FEDERAL TRADE COMMISSION ORGANIZATION, PROCEDURES AND RULES OF PRACTICE ENFORCEMENT OF....110 Self-evaluation. (a) The Commission shall, by February 1, 1989, evaluate its current policies and...

  14. Independent Evaluation: Insights from Public Accounting

    ERIC Educational Resources Information Center

    Brown, Abigail B.; Klerman, Jacob Alex

    2012-01-01

    Background: Maintaining the independence of contract government program evaluation presents significant contracting challenges. The ideal outcome for an agency is often both the impression of an independent evaluation "and" a glowing report. In this, independent evaluation is like financial statement audits: firm management wants both a public…

  15. 7 CFR 4290.370 - Evaluation criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Evaluation criteria. 4290.370 Section 4290.370... Evaluation and Selection of RBICs § 4290.370 Evaluation criteria. Of those Applicants whose management team... following criteria— (a) Whether the Applicant's management team has the knowledge, experience, and...

  16. 76 FR 789 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... this agenda. The Committee will provide advice regarding future research efforts to inform HHS about... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  17. 76 FR 17130 - Office of Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-28

    ... Impact Study fits within this agenda. The Committee will provide advice regarding future research efforts... Planning, Research and Evaluation Advisory Committee on Head Start Research and Evaluation AGENCY... for Head Start Research and Evaluation. General Function of Committee: The Advisory Committee for Head...

  18. Improving Students’ Evaluation of Informal Arguments

    PubMed Central

    Larson, Aaron A.; Britt, M. Anne; Kurby, Christopher A.

    2010-01-01

    Evaluating the structural quality of arguments is a skill important to students’ ability to comprehend the arguments of others and produce their own. The authors examined college and high school students’ ability to evaluate the quality of 2-clause (claim-reason) arguments and tested a tutorial to improve this ability. These experiments indicated that college and high school students had difficulty evaluating arguments on the basis of their quality. Experiments 1 and 2 showed that a tutorial explaining skills important to overall argument evaluation increased performance but that immediate feedback during training was necessary for teaching students to evaluate the claim-reason connection. Using a Web-based version of the tutorial, Experiment 3 extended this finding to the performance of high-school students. The study suggests that teaching the structure of an argument and teaching students to pay attention to the precise message of the claim can improve argument evaluation. PMID:20174611

  19. Personnel Evaluation: Noncommissioned Officer Evaluation Reporting System

    DTIC Science & Technology

    2002-05-15

    Maintenance System), paper copies will be maintained in state, command, or local career management individual files (CMIF) such as AGR management...Routine use DA Form 2166-8 will be maintained in the rated NCO’s official military personnel file (OMPF) and career management individual file (CMIF). A...CAR Chief, Army Reserve CDR commander CE commander’s evaluation CG commanding general CMIF career management individual file CNGB Chief, National Guard

  20. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human perception. This paper introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Image Metric (SSIM), and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analyzing and comparing them. MSE and PSNR are simple, but because they do not incorporate characteristics of the human visual system (HVS) into image quality evaluation, their results are not ideal. SSIM correlates well with subjective judgments and is simple to compute, because it does incorporate human visual effects into the evaluation; however, the SSIM method rests on an assumption that limits its results. The FSIM method can be applied to both grayscale and color images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
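    As a rough illustration of the two simplest metrics named in this record, the following sketch computes MSE and PSNR for two 8-bit images represented as flat pixel lists. It is not code from the paper (which used Matlab); the function names and the toy data are illustrative assumptions.

    ```python
    import math

    def mse(reference, distorted):
        """Mean Squared Error between two equal-length pixel sequences."""
        if len(reference) != len(distorted):
            raise ValueError("images must have the same number of pixels")
        return sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)

    def psnr(reference, distorted, max_value=255.0):
        """Peak Signal-to-Noise Ratio in decibels; higher means closer to the reference."""
        err = mse(reference, distorted)
        if err == 0:
            return math.inf  # identical images
        return 10.0 * math.log10((max_value ** 2) / err)

    # Tiny demonstration on a flattened 4x4 8-bit "image"
    ref = [100] * 16
    noisy = list(ref)
    noisy[0] = 110  # one corrupted pixel

    print(mse(ref, noisy))             # 6.25
    print(round(psnr(ref, noisy), 2))  # 40.17
    ```

    As the record notes, neither metric models the human visual system: a single large pixel error and many small ones can produce the same MSE yet look very different, which is what motivates SSIM and FSIM.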